Archive for the ‘Information Security’ Category

Incomplete Thought: The Time Is Now For OCP-like White Box Security Appliances

January 25th, 2015

Over the last couple of years, we’ve seen some transformative innovation erupt in networking.

In no particular order OR completeness:

  • Clos architectures and protocols are evolving
  • the debate over Ethernet and IP fabrics is driving toward the outcome that we need both
  • x86 is finding a home in networking at increasing levels of throughput thanks to things like DPDK and optimized IP stacks
  • merchant silicon, FPGAs and ASICs are seeing increased investment as the speeds/feeds move from 10 > 40 > 100 Gb/s per NIC
  • programmable abstraction and the operational models to support it have been proven at scale
  • virtualization and virtualized services are now commonplace architectural primitives in discussions of NG networking
  • open source is huge in both orchestration and service delivery
  • entirely new network operating systems like that of Cumulus have emerged to challenge incumbents
  • SDN, NFV and overlays are starting to see production at-scale adoption beyond PoCs
  • automation is starting to take root for everything from provisioning to orchestration to dynamic service insertion and traffic steering

Stir in the profound scale-out requirements of mega-scale web/cloud providers and the creation and adoption of Open Compute Project (OCP)-compliant network, storage and compute platforms, and there’s a real revolution going on:

The Open Compute Networking Project is creating a set of technologies that are disaggregated and fully open, allowing for rapid innovation in the network space. We aim to facilitate the development of network hardware and software – together with trusted project validation and testing – in a truly open and collaborative community environment.

We’re bringing to networking the guiding principles that OCP has brought to servers & storage, so that we can give end users the ability to forgo traditional closed and proprietary network switches – in favor of a fully open network technology stack. Our initial goal is to develop a top-of-rack (leaf) switch, while future plans target spine switches and other hardware and software solutions in the space.

Now, interestingly, while there are fundamental shifts occurring in the approach to and operations of security — the majority of investment in which is still network-centric — as an industry, we are still used to buying our security solutions as closed appliances or chassis form-factors from vendors with integrated hardware and software.

While vendors offer virtualized versions of their hardware solutions as virtual appliances that can also run on bare metal, these have generally not enjoyed widespread adoption because of the operationally-siloed challenges involved in distributing security as a service layer across dedicated appliances or across compute fabrics as an overlay.

But let’s just agree that outside of security, software is eating the world…and that at some point, the voracious appetite of developers and consumers will need to be sated as it relates to security.

Much of the value (up to certain watermark levels of performance and latency) of security solutions is delivered via software which, when coupled with readily-available hardware platforms such as x86 with programmable merchant silicon, can provide some very interesting and exciting solutions at a much lower cost.

So why, then, when networking vendors have released OCP-compliant white-box switching solutions that allow end-users to run whatever software/NOS they desire, have we not seen the same for security?

I think it would be cool to see an OCP white box spec for security and let the security industry innovate on the software to power it.

You?

/Hoff


J-Law Nudie Pics, Jeremiah, Privacy and Dropbox – An Epic FAIL of Mutual Distraction

September 2nd, 2014

From the “It can happen to anyone” department…

A couple of days ago, prior to the announcement that hundreds of celebrities’ nudie shots were liberated from their owners and posted to the Web, I customized some Growl notifications on my Mac to provide some additional realtime auditing of some apps I was interested in.  One of the applications I enabled was Dropbox synch messaging so I could monitor some sharing activity.

Ordinarily, these two events would not be related, except I was also tracking down a local disk utilization issue that was vexing me on a day-to-day basis: my local SSD storage would ephemerally increase/decrease by GIGABYTES and I couldn’t figure out why.

So this evening, quite literally as I was reading RSnake’s interesting blog post titled “So your nude selfies were just hacked,” a Growl notification popped up informing me that several new Dropbox files were completing synchronization.

Puzzled because I wasn’t aware of any public shares and/or remote folders I was synching, I checked the Dropbox synch status and saw a number of files that were unfamiliar — and yet the names of the files certainly piqued my interest…they appeared to belong to a very good friend of mine given their titles. o_O

I checked the folder these files were resting in — gigabytes of them — and realized it was a shared folder that I had set up 3 years ago to allow a friend of mine to share a video from one of our infamous Jiu Jitsu smackdown sessions at the RSA Security Conference.  I hadn’t bothered to unshare said folder for years, especially since my cloud storage quota kept increasing while my local storage didn’t.

As I put 1 and 1 together, I realized that for at least a couple of years, Jeremiah (Grossman) had been using this shared Dropbox folder titled “Dropit” as a repository for file storage, thinking it was HIS!

This is why gigs of storage were appearing/disappearing from my local storage when he added/removed files, but I didn’t see the synch messages and thus didn’t see the filenames.

I jumped on Twitter and engaged Jer in a DM session (see below) where I was laughing so hard I was crying…he eventually called me and I walked him through what happened.

Once we came to terms with what had happened, and how much fun I could have with this, Jer ultimately copied the files off the share and I unshared the Dropbox folder.

We agreed it was important to share this event because, like previous issues each of us has had, we’re all about honest disclosure so we (and others) can learn from our mistakes.

The lessons learned?

  1. Dropbox doesn’t make it clear whether a folder that’s shared and mounted is yours or someone else’s — they look the same.
  2. Ensure you know where your data is synching to!  Services like Dropbox, iCloud, Google Drive, SkyDrive, etc. make it VERY easy to forget where things are actually stored!
  3. Check your logs and/or enable things like Growl notifications (on the Mac) to ensure you can see when things are happening.
  4. Unshare things when you’re done.  Audit these services regularly (see the sketch after this list).
  5. Even seasoned security pros can make basic security/privacy mistakes; I shared a folder and didn’t audit it and Jer put stuff in a folder he thought was his.  It wasn’t.
  6. Never store nudie pics in a folder you don’t encrypt — and as far as I can tell, Jer didn’t…but I DIDN’T CLICK…HONEST!
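
In that spirit, here is a minimal sketch of what auditing your shares programmatically could look like against Dropbox’s HTTP API. I’m assuming the v2 /2/sharing/list_folders endpoint and its response shape here, and DROPBOX_TOKEN is a placeholder for your own OAuth access token, so treat this as illustrative rather than gospel:

    # Minimal shared-folder audit sketch (assumptions: Dropbox API v2,
    # /2/sharing/list_folders endpoint, OAuth token in DROPBOX_TOKEN).
    import json
    import os
    import urllib.request

    TOKEN = os.environ["DROPBOX_TOKEN"]  # placeholder access token

    req = urllib.request.Request(
        "https://api.dropboxapi.com/2/sharing/list_folders",
        data=json.dumps({"limit": 100}).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + TOKEN,
            "Content-Type": "application/json",
        },
    )

    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)

    # Anything listed here is a folder you're still sharing -- review it.
    for entry in result.get("entries", []):
        print("Still shared:", entry.get("name"))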

Jer and I laughed our asses off, but imagine if this had been confidential information or embarrassing pictures and I wasn’t his friend.

If you use Dropbox or similar services, please pay attention.

I don’t want to see your junk.

/Hoff

P.S. Thanks for being a good sport, Jer.

P.P.S. I about died laughing sending these DMs:

[Screenshot: Twitter DM exchange with Jer]


The Curious Case Of Continuous and Consistently Contiguous Crypto…

August 8th, 2013

Here’s an interesting resurgence of a security architecture and an operational deployment model that is making a comeback:

Requiring VPN tunneled and MITM’d access to any resource, internal or external, from any source internal or external.

While mobile devices (laptops, phones and tablets) are often deployed with client or client-less VPN endpoint solutions that enable them to move outside the corporate boundary to access internal resources, there’s a marked uptick in the requirement that all traffic from all sources utilize VPNs (SSL/TLS, IPsec or both) and that ALL sessions terminate on security infrastructure, regardless of ownership or location of either the endpoint or the resource being accessed.

Put more simply: require VPN for (id)entity authentication, access control, and confidentiality, and then MITM all the things, transparently or forcibly forking traffic to security infrastructure.

Why?

The reasons are pretty easy to understand.  Here are just a few of them:

  1. The user experience shouldn’t change regardless of the access modality or location of the endpoint consumer; the notion of who, what, where, when, how, and why matter, but the user shouldn’t have to care.
  2. Whether inside or outside, the notion of split tunneling on a per-service/per-application basis means that we need visibility to understand and correlate traffic patterns and usage.
  3. Because the majority of traffic is encrypted (usually via SSL,) security infrastructure needs the capability to inspect traffic (selectively) using a coverage model that is practical and can give a first-step view of activity.
  4. Information exfiltration (legitimate and otherwise) is a problem.

…so how are folks approaching this?

Easy.  They simply require that all sessions terminate on a set of [read: clustered & scalable] VPN gateways, selectively decrypt based on policy, forward (in serial or parallel) to any number of security apparatus, and in some/many cases, re-encrypt sessions and send them on their way.
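
To make the mechanics concrete, here’s a toy sketch of that terminate/decrypt/inspect/re-encrypt pattern as a TLS relay in Python. The certificate/key filenames and the upstream host are placeholder assumptions, and a real gateway adds policy, clustering and selective decryption; this just shows the plumbing:

    # Toy gateway plumbing: terminate TLS from the client, "inspect" the
    # decrypted bytes, then re-encrypt toward the real destination.
    # gateway.crt/gateway.key and UPSTREAM are placeholder assumptions.
    import socket
    import ssl
    import threading

    UPSTREAM = ("internal-app.example.com", 443)  # hypothetical resource

    server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    server_ctx.load_cert_chain("gateway.crt", "gateway.key")
    client_ctx = ssl.create_default_context()

    def pump(src, dst, label):
        # Inspection point: plaintext is visible here, between the two
        # encrypted legs of the session.
        while True:
            data = src.recv(4096)
            if not data:
                break
            print(label, len(data), "bytes")
            dst.sendall(data)

    listener = socket.create_server(("0.0.0.0", 8443))
    with server_ctx.wrap_socket(listener, server_side=True) as tls_in:
        while True:
            downstream, addr = tls_in.accept()
            upstream = client_ctx.wrap_socket(
                socket.create_connection(UPSTREAM),
                server_hostname=UPSTREAM[0],
            )
            threading.Thread(target=pump, args=(downstream, upstream, "c->s"), daemon=True).start()
            threading.Thread(target=pump, args=(upstream, downstream, "s->c"), daemon=True).start()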

We’ve been doing this “forever” with the “outside-in” model (remote access to internal resources,) but the notion that folks are starting to do this ubiquitously on internal networks is the nuance.  AVC (application visibility and control) is the inside-out component (usually using transparent forward proxies with trusted PAC files on endpoints) with remote access and/or reverse proxies like WAFs and/or ADCs as the outside-in use case.

These two ops models were generally viewed and managed as separate problems.  Now thanks to Cloud, Mobility, virtualization and BYOE (bring your own everything) as well as the more skilled and determined set of adversaries, we’re seeing a convergence of the two.  To make the “inside-out” and “outside-in” more interesting, what we’re really talking about here is extending the use case to include “inside-inside” if you catch my drift.

Merging the use case approach at a fundamental architecture level can be useful; this methodology works regardless of source or destination.  It does require all sorts of incidental changes to things like IdM, AAA, certificate management, etc. but it’s one way that folks are trying to centralize the distributed — if you get what I mean.

I may draw a picture to illustrate what I mean, but do let me know if you’re doing this (many of the largest customers I know are) and whether it makes sense.

/Hoff

P.S. Remember back in the 80’s/90’s when 3Com bundled NIC cards with integrated IPSec VPN capability?  Yeah, that.


Incomplete Thought: The Psychology Of Red Teaming Failure – Do Not Pass Go…

August 6th, 2013
team fortress red team (Photo credit: gtrwndr87)

I could probably just ask this of some of my friends — many of whom are the best in the business when it comes to Red Teaming/Pen Testing, but I thought it would be an interesting little dialog here, in the open:

When a Red Team is engaged by an entity to perform a legally-authorized pentest (physical or electronic) with an explicit “get out of jail free card,” does that change the tactics, strategy and risk appetite of the team versus how they would operate were they not to have that parachute?

Specifically, does the team dial-up or dial-down the aggressiveness of the approach and execution KNOWING that they won’t be prosecuted, go to jail, etc.?

Blackhats and criminals operating outside this envelope don’t have the luxury of counting on a gilded escape should failure occur and thus the risk/reward mapping *might* be quite different.

To that point, I wonder what the gap is between an authorized Red Team’s actions and those of adversaries who have everything to lose?  What say ye?

/Hoff


Incomplete Thought: In-Line Security Devices & the Fallacies Of Block Mode

June 28th, 2013

The results of a long-running series of extremely scientific studies have produced a Metric Crapload™ of anecdata.

Namely, hundreds of detailed discussions (read: lots of booze and whining) over the last 5 years have resulted in the following:

Most in-line security appliances (excluding firewalls) with the ability to actively dispose of traffic — services such as IPS, WAF and anti-malware — are deployed in “monitor” or “learning” mode and are rarely, if ever, enabled with automated blocking.  In essence, they are deployed as detective versus preventative security services.
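
To illustrate the distinction, here’s a toy sketch of an in-line filter in which a single flag is all that separates detective from preventative behavior (the signature and the flag are, of course, invented for illustration):

    # Toy in-line filter: one flag separates "monitor" (detective) from
    # "block" (preventative). The signature and flag are illustrative.
    import re

    BLOCK_MODE = False  # the setting that, anecdotally, rarely gets flipped
    SIGNATURE = re.compile(rb"(?i)union\s+select")  # hypothetical IPS rule

    def handle(payload):
        """Return the payload to forward it, or None to drop it."""
        if SIGNATURE.search(payload):
            print("ALERT: signature matched")   # detective: always fires
            if BLOCK_MODE:
                return None                     # preventative: dispose of it
        return payload                          # otherwise traffic flows on

    if __name__ == "__main__":
        for pkt in (b"GET /?q=1 UNION SELECT passwd", b"GET /index.html"):
            print("blocked" if handle(pkt) is None else "forwarded")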

I have compiled many reasons for this.

I am interested in hearing whether you agree/disagree and your reasons for such.

/Hoff


NIST’s Trusted Geolocation in the Cloud: PoC Implementation

December 22nd, 2012

I was very interested and excited to learn what NIST researchers and staff had come up with when I saw the notification of the “Draft Interagency Report 7904, Trusted Geolocation in the Cloud: Proof of Concept Implementation.”

It turns out that this report is an iteration on the PoC previously created by VMware, Intel and RSA back in 2010, which utilized Intel’s TXT, VMware’s virtualization platform and the RSA/Archer GRC platform, as this one does.

I haven’t spent much time looking at the differences, but I’m hoping as I read through it that we’ve made progress…

You can read about the original PoC here, and watch a video from 2010 about it here.  Then you can read about it again in its current iteration, here (PDF.)

I wrote about this topic back in 2009 and still don’t have a good firm answer to the question I asked then in a blog titled “Quick Question: Any Public Cloud Providers Using Intel TXT?” and the follow-on “More On High Assurance (via TPM) Cloud Environments.”

At CloudConnect 2011 I also filmed a session with the Intel/RSA/VMware folks titled “More On Cloud and Hardware Root Of Trust: Trusting Cloud Services with Intel® TXT.”

I think this is really interesting stuff and a valuable security and compliance capability, but it is apparently still hampered by practical deployment challenges.

I’m also confused as to why the RSA employees were not appropriately attributed under the NIST banner, given that this is very much a product-specific/vendor-specific set of solutions…I’m not sure I’ve ever seen a NIST-branded report like this.

At any rate, I am interested to see if we will get to the point where these solutions will have more heterogeneous uptake across platforms.

/Hoff


On Puppy Farm Vendors, Petco and The Remarkable Analog To Security Consultancies/Integrators…

December 5th, 2012
Funny Attention Dogs And Owners Sign (Photo credits: www.dogpoopsigns.com)

Imagine you are part of a company in the “Pet Industry.”  Let’s say dogs, specifically.

Imagine further that regardless of whether you work on the end that feeds the dog, provides services focused on grooming the dog, sells accessories for the dog, actually breeds and raises the dog or deals with cleaning up what comes out the other end of the dog, that you also simultaneously spend your time offering your opinions on how much you despise the dog industry.

Hmmmm.

Now, either you’re being refreshingly honest, or you’re simply being shrewd about which end of the mutt you’re targeting your services toward — and sometimes it’s both ends and the middle — but you’re still a part of the dog industry.

And we all know it’s a dog-eat-dog world…in the Pet business as it is in the Security business.  Which ironically illustrates the cannibalistic nature of being in the security industry whilst trying to distance oneself from it by juxtaposing one’s own position against that of the security community.

Claiming to be a Dog Whisperer in an industry of other aimless people shouting and clapping loudly whilst looking to perpetuate bad dog-breeding practices so they can sell across the supply chain is an interesting tactic.  However, yelling “BAD DOG!” and wondering why it continues to eat your slippers doesn’t change behavior.

You can’t easily dismantle an industry, but you can offer better training, solutions or techniques to make a difference.

Either way, there’s a lot of tail wagging and crap to clean up.

Lots to consider in this little analog.  For everyone.

/Hoff

P.S. @bmkatz points us all to this amazing resource you may find useful.


Why Amazon Web Services (AWS) Is the Best Thing To Happen To Security & Why I Desperately Want It To Succeed

November 29th, 2012

Many people who may only casually read my blog or peer at the timeline of my tweets may come away with the opinion that I suffer from confirmation bias when I speak about security and Cloud.

That is, many conclude that I am pro Private Cloud and against Public Cloud.

I find this deliciously ironic and wildly inaccurate. However, I must also take responsibility for this, as anytime one threads the needle and attempts to present a view from both sides with regard to incendiary topics without planting a polarizing stake in the ground, it gets confusing.

Let me clear some things up.

Digging deeper into what I believe, one would actually find that my blog, tweets, presentations, talks and keynotes highlight deficiencies in current security practices and solutions on the part of providers, practitioners and users in both Public AND Private Cloud, and in my own estimation, deliver an operationally-centric perspective that is reasonably critical and yet sensitive to emergent paths as well as the well-trodden path behind us.

I’m not a developer.  I dabble in little bits of code (interpreted and compiled) for humor and to try and remain relevant.  Nor am I an application security expert for the same reason.  However, I spend a lot of time around developers of all sorts, those that write code for machines whose end goal isn’t to deliver applications directly, but rather help deliver them securely.  Which may seem odd as you read on…

The name of this blog, Rational Survivability, highlights my belief that the last two decades of security architecture and practices — while useful in foundation — require a rather aggressive tune-up of priorities.

Our trust models, architecture, and operational silos have not kept pace with the velocity of the environments they were initially designed to support and unfortunately as defenders, we’ve been outpaced by both developers and attackers.

Since we’ve come to the conclusion that there’s no such thing as perfect security, “survivability” is a better goal.  Survivability leverages “security” and is ultimately a subset of resilience but is defined as the “…capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents.”  You might be interested in this little ditty from back in 2007 on the topic.

Sharp readers will immediately recognize the parallels between this definition of “survivability,” how security applies within context, and how phrases like “design for failure” align.  In fact, this is one of the calling cards of a company that has become synonymous with (IaaS) Public Cloud: Amazon Web Services (AWS.)  I’ll use them as an example going forward.

So here’s a line in the sand that I think will be polarizing enough:

I really hope that AWS continues to gain traction with the Enterprise.  I hope that AWS continues to disrupt the network and security ecosystem.  I hope that AWS continues to pressure the status quo and I hope that they do it quickly.

Why?

Almost a decade ago, the Open Group’s Jericho Forum published their Commandments.  Designed to promote a change in thinking and operational constructs with respect to security, what they presciently released upon the world describes a point at which one might imagine taking one’s most important assets and connecting them directly to the Internet and the shifts required to understand what that would mean to “security”:

  1. The scope and level of protection should be specific and appropriate to the asset at risk.
  2. Security mechanisms must be pervasive, simple, scalable, and easy to manage.
  3. Assume context at your peril.
  4. Devices and applications must communicate using open, secure protocols.
  5. All devices must be capable of maintaining their security policy on an un-trusted network.
  6. All people, processes, and technology must have declared and transparent levels of trust for any transaction to take place.
  7. Mutual trust assurance levels must be determinable.
  8. Authentication, authorization, and accountability must interoperate/exchange outside of your locus/area of control.
  9. Access to data should be controlled by security attributes of the data itself.
  10. Data privacy (and security of any asset of sufficiently high value) requires a segregation of duties/privileges.
  11. By default, data must be appropriately secured when stored, in transit, and in use.
These seem harmless enough today, but were quite unsettling when paired with the notion of “de-perimeterization,” which was often misconstrued to mean the immediate disposal of firewalls.  Many security professionals appreciated the commandments for what they expressed, but the design patterns, availability of solutions and belief systems of traditionalists constrained traction.

Interestingly enough, now that the technology, platforms, and utility services have evolved to enable these sorts of capabilities, and in fact have stressed our approaches to date, these exact tenets are what Public Cloud forces us to come to terms with.

If one were to look at what public cloud services like AWS mean when aligned to traditional “enterprise” security architecture, operations and solutions, and map that against the Jericho Forum’s Commandments, the opportunity for a perfect rethink becomes clear.

Instead of being focused on implementing “security” to protect applications and information based at the network layer — which is more often than not blind to both, contextually and semantically — public cloud computing forces us to shift our security models back to protecting the things that matter most: the information and the conduits that traffic in them (applications.)

As networks become more abstracted, it means that existing security models do also.  This means that we must think about security programmatically, embedded as a functional delivery requirement of the application.
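
In AWS terms, that looks something like the following: network policy declared in code and shipped with the application rather than filed as a firewall ticket. This is a minimal boto3 sketch; the group name, port and CIDR are placeholders and error handling is omitted:

    # Security as code: the application's network policy is declared
    # programmatically at deploy time. Names, ports and CIDR below are
    # illustrative placeholders.
    import boto3

    ec2 = boto3.client("ec2")

    sg = ec2.create_security_group(
        GroupName="web-tier-sg",
        Description="Policy shipped with the app, not the network team",
    )

    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # HTTPS from anywhere
        }],
    )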

“Security” in complex, distributed and networked systems is NOT a tidy simple atomic service.  It is, unfortunately, represented as such because we choose to use a single noun to represent an aggregate of many sub-services, shotgunned across many layers, each with its own context, metadata, protocols and consumption models.

As the use cases for public cloud obscure and abstract these layers — flatten them — we’re left with the core of that on which we should focus:

Build secure, reliable, resilient, and survivable systems of applications, comprised of secure services, atop platforms that are themselves engineered to do the same, in a way in which the information that transits them inherits these qualities.

So if Public Cloud forces one to think this way, how does one relate this to practices of today?

Frankly, enterprise (network) security design patterns are a crutch.  The screened-subnet DMZ pattern with perimeters is outmoded.  As Gunnar Peterson eloquently described, our best attempts at “security” over time are always some variation of firewalls and SSL.  This is the sux0r.  Importantly, this is not stated to blame anyone or suggest that a bad job is being done, but rather that a better one can be.

It’s not like we don’t know *what* the problems are, we just don’t invest in solving them as long-term projects.  Instead, we deploy compensating controls that defer what is now becoming more inevitable: the compromise of applications that are poorly engineered and defended by systems that have no knowledge or context of the things they are defending.

We all know this, and yet looking at most private cloud platforms and implementations, we gravitate toward replicating these traditional design patterns logically after we’ve gone to so much trouble to articulate our way around them.  Public clouds make us approach what, where and how we apply “security” differently because we don’t have these crutches.

Either we learn to walk without them or simply not move forward.

Now, let me be clear.  I’m not suggesting that we don’t need security controls, but I do mean that we need a different and better application of them at a different level, protecting things that aren’t tied to physical topology or addressing schemes…or operating systems (inclusive of things like hypervisors, also.)

I think we’re getting closer.  Beyond infrastructure as a service, platform as a service gets us even closer.

Interestingly, at the same time we see the evolution of computing with Public Cloud, networking is also undergoing a renaissance, and as this occurs, security is coming along for the ride.  Because it has to.

As I was writing this blog (ironically in the parking lot of VMware awaiting the start of a meeting to discuss abstraction, networking and security,) James Staten (Forrester) tweeted something from Werner Vogels’ (@Werner) keynote at AWS re:Invent:

I couldn’t have said it better myself 🙂

So while I may have been, and will continue to be, a thorn in the side of platform providers to improve the “survivability” capabilities that help us get from here to there, I reiterate the title of this scribbling: Amazon Web Services (AWS) Is the Best Thing To Happen To Security & I Desperately Want It To Succeed.

I trust that’s clear?

/Hoff

P.S. There’s so much more I could/should write, but I’m late for the meeting 🙂


Quick Quip: Capability, Reliability and Liability…Security Licensing

November 28th, 2012

Earlier today, I tweeted the following, which Dan Kaminsky (@dakami) commented on:

…which I explained with:

This led to a comment by Preston Wood, who suggested something very interesting from the perspective of both leadership and accountability:

…and that brought forward another insightful comment:

Pretty interesting, right? Engineers, architects, medical practitioners, etc. all have degrees/licenses and absorb liability upon failure. What about security?

What do you think about this concept?

/Hoff


Should/Can/Will Virtual Firewalls Replace Physical Firewalls?

October 15th, 2012
Simulation of a firewall between a LAN and a WAN (Photo credit: Wikipedia)

“Should/Can/Will Virtual Firewalls Replace Physical Firewalls?”

The answer is, as always, “Of course, but not really, unless maybe, you need them to…” 🙂

This discussion crops up from time-to-time, usually fueled by a series of factors which often lack the context to appropriately address it.

The reality is there exists the ever-useful answer of “it depends,” and frankly it’s a reasonable answer.

Back in 2008 when I created “The Four Horsemen of the Virtualization Security Apocalypse” presentation, I highlighted the very real things we needed to be aware of as we saw the rapid adoption of server virtualization…and the recommendations from virtualization providers as to the approach we should take in terms of securing the platforms and workloads atop them.  Not much has changed in almost five years.

However, each time I’m asked this question, I inevitably sound evasive as I ask for more detail about what the person doing the asking actually means by “physical” firewalls.  Normally the words “air-gap” are added to the mix.

The very interesting thing about how people answer this question is that, in reality, the great many firewalls deployed today have the following features in common:

  1. Heavy use of network “LAG” (link aggregation group) interface bundling and VLAN trunking/tagging
  2. Heavy use of network virtualization, leveraging VLANs as security boundaries, trunked across said interfaces
  3. Increased use of virtualized contexts and isolated-resource “virtual systems” with separate policies
  4. Heavy use of ASIC/FPGA and x86 architectures which make use of shared state tables, memory and physical hardware synced across fabrics and cluster members
  5. Predominant use of “stateful inspection” at layers 2-4 (see the sketch after this list) with the addition of protocol decoders at L5-7 for more “application-centric” enforcement
  6. Increasing use of “transparent proxies” at L2 but less (if any) full circuit or application proxies in the classic sense
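
As promised in item 5 above, here’s what “stateful inspection” boils down to: a 5-tuple state table consulted before the rulebase. A toy sketch with invented flows and rules:

    # Toy stateful inspection: established flows (tracked by 5-tuple)
    # skip the rulebase; new flows are admitted only if a rule allows
    # them. Rules and flows are invented for illustration.
    ALLOW_NEW = {("tcp", 443), ("tcp", 22)}  # (proto, dst_port) permits
    state_table = set()

    def inspect(proto, src_ip, src_port, dst_ip, dst_port):
        flow = (proto, src_ip, src_port, dst_ip, dst_port)
        reverse = (proto, dst_ip, dst_port, src_ip, src_port)
        if flow in state_table or reverse in state_table:
            return "forward (established)"
        if (proto, dst_port) in ALLOW_NEW:
            state_table.add(flow)  # remember the session for return traffic
            return "forward (new session)"
        return "drop"

    print(inspect("tcp", "10.0.0.5", 51000, "192.0.2.7", 443))  # new session
    print(inspect("tcp", "192.0.2.7", 443, "10.0.0.5", 51000))  # established
    print(inspect("tcp", "10.0.0.5", 51001, "192.0.2.7", 25))   # drop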

So before I even START to address the use cases of the “virtual firewalls” that people reference as the comparison, nine times out of ten that supposed “air gap” with dedicated physical firewalls that they reference doesn’t compute.

Most of the firewall implementations that people have in place meet most of the criteria mentioned in items 1-6 above.

Further, most firewall architectures today aren’t running full L7 proxies across dedicated physical interfaces like in the good old days (Raptor, etc.) for some valid reasons…(read the postscript for an interesting prediction.)

Failure domains, and the threat modeling that illustrates cascading impact due to complexity, outright failure or compromised controls, are usually what people are interested in when asking this question, but this gets almost completely obscured by the “physical vs. virtual” concern and we often never dig deeper.

There are some amazing things that can be done in virtual constructs that we can’t do in the physical and there are some pretty important things that physical firewalls can provide that virtual versions have trouble with.  It’s all a matter of balance, perspective, need, risk and reward…oh, and operational simplicity.

I think it’s important to understand what we’re comparing when asking that question before we conflate use cases, compare and mismatch expectations, and make summary generalizations (like I just did 🙂) about that which we are contrasting.

I’ll actually paint these use cases in a follow-on post shortly.

/Hoff

POSTSCRIPT:

I foresee that we will see a return of the TRUE application-level proxy firewall — especially with application identification, cheap hardware, more security and networking virtualized in hardware.  I see this being deployed both on-premise and as part of a security as a service offering (they are already, today — see CloudFlare, for example.)

If you look at the need to terminate SSL/TLS and provide for not only L4-L7 sanity, protect applications/sessions at L5-7 (web and otherwise) AND the renewed dependence upon XML, SOAP, REST, JSON, etc., it will drive even more interesting discussions in this space.  Watch as the hybrid merge of the WAF+XML security services gateway returns to vogue… (see also Cisco EOLing ACE while I simultaneously receive an email from Intel informing me I can upgrade to their Intel Expressway Service Gateway…which I believe (?) was from the Sarvega acquisition?)
