
Archive for the ‘Application Security’ Category

Why Amazon Web Services (AWS) Is the Best Thing To Happen To Security & Why I Desperately Want It To Succeed

November 29th, 2012

Many people who may only casually read my blog or peer at the timeline of my tweets may come away with the opinion that I suffer from confirmation bias when I speak about security and Cloud.

That is, many conclude that I am pro Private Cloud and against Public Cloud.

I find this deliciously ironic and wildly inaccurate. However, I must also take responsibility for this: anytime one threads the needle and attempts to present a view from both sides of an incendiary topic without planting a polarizing stake in the ground, confusion results.

Let me clear some things up.

Digging deeper into what I believe, one would actually find that my blog, tweets, presentations, talks and keynotes highlight deficiencies in current security practices and solutions on the part of providers, practitioners and users in both Public AND Private Cloud. In my own estimation, they deliver an operationally-centric perspective that is reasonably critical and yet sensitive to emergent paths as well as the well-trodden path behind us.

I’m not a developer. I dabble in little bits of code (interpreted and compiled) for humor and to try to remain relevant. Nor am I an application security expert, for the same reason. However, I spend a lot of time around developers of all sorts, including those who write code for machines whose end goal isn’t to deliver applications directly but rather to help deliver them securely. Which may seem odd as you read on…

The name of this blog, Rational Survivability, highlights my belief that the last two decades of security architecture and practices, while useful as a foundation, require a rather aggressive tune-up of priorities.

Our trust models, architecture, and operational silos have not kept pace with the velocity of the environments they were initially designed to support and unfortunately as defenders, we’ve been outpaced by both developers and attackers.

Since we’ve come to the conclusion that there’s no such thing as perfect security, “survivability” is a better goal.  Survivability leverages “security” and is ultimately a subset of resilience but is defined as the “…capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents.”  You might be interested in this little ditty from back in 2007 on the topic.

Sharp readers will immediately recognize the parallels between this definition of “survivability,” how security applies within context, and how phrases like “design for failure” align.  In fact, this is one of the calling cards of a company that has become synonymous with (IaaS) Public Cloud: Amazon Web Services (AWS).  I’ll use them as an example going forward.

So here’s a line in the sand that I think will be polarizing enough:

I really hope that AWS continues to gain traction with the Enterprise.  I hope that AWS continues to disrupt the network and security ecosystem.  I hope that AWS continues to pressure the status quo and I hope that they do it quickly.

Why?

Almost a decade ago, the Open Group’s Jericho Forum published their Commandments.  Designed to promote a change in thinking and operational constructs with respect to security, what they presciently released upon the world describes a point at which one might imagine connecting one’s most important assets directly to the Internet, and the shifts required to understand what that would mean for “security”:

  1. The scope and level of protection should be specific and appropriate to the asset at risk.
  2. Security mechanisms must be pervasive, simple, scalable, and easy to manage.
  3. Assume context at your peril.
  4. Devices and applications must communicate using open, secure protocols.
  5. All devices must be capable of maintaining their security policy on an un-trusted network.
  6. All people, processes, and technology must have declared and transparent levels of trust for any transaction to take place.
  7. Mutual trust assurance levels must be determinable.
  8. Authentication, authorization, and accountability must interoperate/exchange outside of your locus/area of control.
  9. Access to data should be controlled by security attributes of the data itself.
  10. Data privacy (and security of any asset of sufficiently high value) requires a segregation of duties/privileges.
  11. By default, data must be appropriately secured when stored, in transit, and in use.

These seem harmless enough today, but were quite unsettling when paired with the notion of “de-perimeterization,” which was often misconstrued to mean the immediate disposal of firewalls.  Many security professionals appreciated the commandments for what they expressed, but the design patterns, availability of solutions and belief systems of traditionalists constrained traction.

Interestingly enough, now that the technology, platforms, and utility services have evolved to enable these sorts of capabilities, and in fact have stressed our approaches to date, these exact tenets are what Public Cloud forces us to come to terms with.

If one were to look at what public cloud services like AWS mean when aligned to traditional “enterprise” security architecture, operations and solutions, and map that against the Jericho Forum’s Commandments, one finds that they enable a perfect rethink.

Instead of being focused on implementing “security” to protect applications and information based at the network layer — which is more often than not blind to both, contextually and semantically — public cloud computing forces us to shift our security models back to protecting the things that matter most: the information and the conduits that traffic in it (applications).

As networks become more abstracted, so do existing security models.  This means that we must think about security programmatically, embedded as a functional delivery requirement of the application.
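If security is to be a functional delivery requirement of the application, it has to be expressible in the application’s own code. Here is a minimal, purely illustrative Python sketch (the names and the permission model are my own inventions, not any vendor’s API) of declaring and enforcing an authorization requirement programmatically rather than at the network layer:

```python
# Illustrative only: a policy declared in code travels with the application,
# regardless of what network or topology it happens to be deployed into.
from functools import wraps

class Forbidden(Exception):
    pass

def requires(permission):
    """Decorator: deny the call unless the caller's context grants `permission`."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(ctx, *args, **kwargs):
            if permission not in ctx.get("permissions", set()):
                raise Forbidden(f"{fn.__name__} requires {permission!r}")
            return fn(ctx, *args, **kwargs)
        return wrapper
    return decorator

@requires("records:read")
def read_record(ctx, record_id):
    # The control is attached to the function, not to an IP address or subnet.
    return {"id": record_id, "owner": ctx["user"]}

alice = {"user": "alice", "permissions": {"records:read"}}
print(read_record(alice, 42)["id"])  # allowed: caller holds the permission
```

The point of the sketch is the placement of the control, not the mechanism: the check rides along with the application wherever it is deployed.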

“Security” in complex, distributed and networked systems is NOT a tidy simple atomic service.  It is, unfortunately, represented as such because we choose to use a single noun to represent an aggregate of many sub-services, shotgunned across many layers, each with its own context, metadata, protocols and consumption models.

As the use cases for public cloud obscure and abstract these layers (flattening them), we’re left with the core of that on which we should focus:

Build secure, reliable, resilient, and survivable systems of applications, comprised of secure services, atop platforms that are themselves engineered to do the same, in a way in which the information that transits them inherits these qualities.

So if Public Cloud forces one to think this way, how does one relate this to practices of today?

Frankly, enterprise (network) security design patterns are a crutch.  The screened-subnet DMZ pattern with perimeters is outmoded. As Gunnar Peterson eloquently described, our best attempts at “security” over time are always some variation of firewalls and SSL.  This is the sux0r.  Importantly, this is not stated to blame anyone or suggest that a bad job is being done, but rather that a better one can be.

It’s not that we don’t know *what* the problems are; we just don’t invest in solving them as long-term projects.  Instead, we deploy compensating controls that defer what is now becoming inevitable: the compromise of applications that are poorly engineered and defended by systems that have no knowledge or context of the things they are defending.

We all know this, and yet looking at most private cloud platforms and implementations, we gravitate toward replicating these traditional design patterns logically after we’ve gone to so much trouble to articulate our way around them.  Public clouds make us approach what, where and how we apply “security” differently, because we don’t have these crutches.

Either we learn to walk without them, or we simply won’t move forward.

Now, let me be clear.  I’m not suggesting that we don’t need security controls, but I do mean that we need a different and better application of them, at a different level, protecting things that aren’t tied to physical topology, addressing schemes, or operating systems (inclusive of things like hypervisors, too).

I think we’re getting closer.  Beyond infrastructure as a service, platform as a service gets us even closer.

Interestingly, at the same time we see the evolution of computing with Public Cloud, networking is also undergoing a renaissance, and as this occurs, security is coming along for the ride.  Because it has to.

As I was writing this blog (ironically, in the parking lot of VMware awaiting the start of a meeting to discuss abstraction, networking and security), James Staten (Forrester) tweeted something from Werner Vogels’ keynote at AWS re:Invent:

I couldn’t have said it better myself :)

So while I may have been, and will continue to be, a thorn in the side of platform providers, pushing them to improve their “survivability” capabilities and help us get from here to there, I reiterate the title of this scribbling: Amazon Web Services (AWS) Is the Best Thing To Happen To Security & I Desperately Want It To Succeed.

I trust that’s clear?

/Hoff

P.S. There’s so much more I could/should write, but I’m late for the meeting :)


Elemental: Leveraging Virtualization Technology For More Resilient & Survivable Systems

June 21st, 2012

Yesterday saw the successful launch of Bromium at GigaOM’s Structure conference in San Francisco.

I was privileged to spend some stage time with Stacey Higginbotham and Simon Crosby (co-founder, CTO, mentor and good friend) after Simon’s big reveal of Bromium‘s operating model and technology approach.

While product specifics weren’t disclosed, we spent some time chatting about Bromium’s approach to solving a particularly tough set of security challenges with a focus on realistic outcomes given the advanced adversaries and attack methodologies in use today.

At the heart of our discussion* was the notion that in many cases one cannot detect, let alone prevent, specific types of attacks, and that this requires a new way of containing the impact of exploiting vulnerabilities (known or otherwise) that target the human factor as much as they do weaknesses in underlying operating systems and application technologies.

I think Kurt Marko did a good job summarizing Bromium in his article here, so if you’re interested in learning more, check it out. I can tell you that, as a technology advisor to Bromium and someone who is using the technology preview, it lives up to the hype and gives me hope that we’ll see even more novel approaches to usable security leveraging technology like this.  More will be revealed as time goes on.

That said, with productization details purposely left vague, Bromium’s implementation leveraging Intel’s VT technology and its “microvisor” approach brought about comments yesterday from many folks who were reminded of what they called “similar approaches” (however right or wrong they may be) to using virtualization technology and/or “sandboxing” to provide more “secure” systems.  I recall the following from passing conversation yesterday:

  • Determina (VMware acquired)
  • GreenBorder (Google acquired)
  • Trusteer
  • Invincea
  • DeepSafe (Intel/McAfee)
  • Intel TXT w/MLE & hypervisors
  • Self Cleansing Intrusion Tolerance (SCIT)
  • PrivateCore (Newly launched by Oded Horovitz)
  • etc…

I don’t think Simon would argue that the underlying approach of utilizing virtualization for security (even for an “endpoint” application) is new, but the approach toward making it invisible and transparent from a user experience perspective certainly is.  Operational simplicity and not making security the user’s problem is a beautiful thing.

Here is a video of Simon’s and my session, “Secure Everything.”

What’s truly of interest to me, based on what Simon said yesterday, is that the application of this approach could be just as at home in a “server,” cloud or mobile application as it is in a classical desktop environment.  There are certainly dependencies (such as VT) today, but the notion that we can leverage virtualization for better resilience, survivability and assurance for more “trustworthy” systems is exciting.

I for one am very excited to see how we’re progressing from “bolt on” to more integrated approaches in our security models. This will bear fruit as we become more platform and application-centric in our approach to security, allowing us to leverage fundamentally “elemental” security components to allow for more meaningfully trustworthy computing.

/Hoff

* The range of topics was rather hysterical; from the Byzantine Generals’ problem to K/T Boundary extinction-class events to the Mexican/U.S. border fence, it was chock full of analogs ;)

 


The Classical DMZ Design Pattern: How To Kill Security In the Cloud

July 7th, 2010

Every day I get asked to discuss how Cloud Computing impacts security architecture and what enterprise security teams should do when considering “Cloud.”

These discussions generally lend themselves to a bifurcated set of perspectives depending upon whether we’re discussing Public or Private Cloud Computing.

This is unfortunate.

From a security perspective, focusing the discussion primarily on the deployment model instead of thinking holistically about the “how, why, where, and who” of Cloud often means that we’re tethered to outdated methodologies because that’s where our comfort zones are.

When we’re discussing Public Cloud, security teams are starting to understand that the choice of compensating controls, and how they deploy and manage them, requires operational, economic and architectural changes.  They are also coming to terms with the changes to application architectures as they relate to distributed computing and SOA-like implementations.  It’s uncomfortable and it’s a slow slog forward (for lots of good reasons), but good questions are asked when considering the security, privacy and compliance impacts of Public Cloud, what can or should be done about them, and how things need to change.

When discussing Private Cloud, however, even when a “clean slate design” is proposed, the same teams tend to try to fall back to what they know and preserve the classical n-tier application architecture separated by physical or virtual compensating controls — the classical split-subnet DMZ or perimeterized model of “inside” vs “outside.” They can do this given the direct operational control exposed by highly-virtualized infrastructure.  Sometimes they’re also forced into this given compliance and audit requirements. The issue here is that this discussion centers around molding cloud into the shape of the existing enterprise models and design patterns.

This is an issue; trying to simultaneously secure these diametrically-opposed architectural implementations yields cost inefficiencies, security disparity and policy violations, introduces operational risk, and generally means that the ball doesn’t get moved forward in protecting the things that matter most.

Public Cloud Computing is a good thing for the security machine; it forces us to (again) come face-to-face with the ugliness of the problems of securing the things that matter most — our information. Private Cloud Computing — when improperly viewed from the perspective of simply preserving the status quo — can often cause stagnation and introduce roadblocks.  We’ve got to move beyond this.

Public Cloud speaks to (and delivers on) the need for agility, flexibility, mobility and efficiency: things with which traditional enterprise security is often not well aligned.  Trying to fit “Cloud” into neat and tidy DMZ “boxes” doesn’t work.  Cloud requires revisiting our choices for security.  We should take advantage of it, not try to squash it.

/Hoff


CloudSQL – Accessing Datastores in the Sky using SQL…

December 2nd, 2008

I think this is definitely a precursor of things to come, and it introduces some really interesting security discussions to be had regarding the portability, privacy and security of datastores in the cloud.

Have you heard of Zoho?  No?  Zoho is a SaaS vendor that describes itself thusly:

Zoho is a suite of online applications (services) that you sign up for and access from our Website. The applications are free for individuals and some have a subscription fee for organizations. Our vision is to provide our customers (individuals, students, educators, non-profits, small and medium sized businesses) with the most comprehensive set of applications available anywhere (breadth); and for those applications to have enough features (depth) to make your user experience worthwhile.

Today, Zoho announced the availability of CloudSQL, which is middleware that allows customers who use Zoho's SaaS apps to "…access their data on Zoho SaaS applications using SQL queries."

From their announcement:

Zoho CloudSQL is a technology that allows developers to interact with business data stored across Zoho Services using the familiar SQL language. In addition, JDBC and ODBC database drivers make writing code a snap – just use the language construct and syntax you would use with a local database instance. Using the latest Web technology no longer requires throwing away years of coding and learning.

Zoho CloudSQL allows businesses to connect and integrate the data and applications they have in Zoho with the data and applications they have in house, or even with other SaaS services. Unlike other methods for accessing data in the cloud, CloudSQL capitalizes on enterprise developers’ years of knowledge and experience with the widely‐used SQL language. This leads to faster deployments and easier (read: less expensive) integration projects.

Basically, CloudSQL is interposed between the suite of Zoho applications and the backend datastores and functions as an intermediary receiving SQL queries against the pooled data sets using standard SQL commands and dialects. Click on the diagram below for a better idea of what this looks like.

[Diagram: Zoho CloudSQL architecture]
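Mechanically, the promise is that the remote datastore behaves like a local one as far as your code is concerned. A rough Python analogue of that claim, with sqlite3 standing in for a hypothetical CloudSQL driver and an invented table, just to show that the query text is plain, unchanged SQL:

```python
import sqlite3

# sqlite3 stands in for a hypothetical CloudSQL JDBC/ODBC driver; the point
# is that the syntax is exactly what you'd use against a local database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, customer TEXT, total REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?, ?)",
                 [(1, "acme", 1200.0), (2, "initech", 350.5)])

# The same SELECT would be sent, unchanged, through the CloudSQL intermediary.
rows = conn.execute(
    "SELECT customer, total FROM invoices WHERE total > 500").fetchall()
print(rows)  # [('acme', 1200.0)]
```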
What's really interesting about allowing native SQL access is the ability to then allow much easier information interchange between apps/databases in an enterprise's "private cloud(s)" and the Zoho "public" cloud.

Further, it means that your data is more "portable" as it can be backed up, accessed, and processed by applications other than Zoho's.  Imagine if they were to extend the SQL exposure to other cloud/SaaS providers…this is where it will get really juicy. 

This sort of thing *will* happen.  Customers will see the absolute utility of exposing their cloud-based datastores and sharing them amongst business partners, much in the spirit of how it's done today, but with the datastores (or chunks of them) located off-premises.

That's all good and exciting, but obviously security questions/concerns immediately surface regarding such things as: authentication, encryption, access control, input sanitization, privacy and compliance…

Today our datastores typically live inside the fortress with multiple layers of security and proxied access from applications, shielded from direct access, and yet we still have basic issues with attacks such as SQL injection.  Imagine how much fun we can have with this!
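To make the SQL injection worry concrete, here's a self-contained sketch (sqlite3, with an invented table) of why string-built queries against an exposed datastore leak data while bound parameters don't:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

evil = "nobody' OR '1'='1"

# String concatenation: the attacker's input rewrites the WHERE clause,
# and the query matches every row.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + evil + "'").fetchall()

# Bound parameters: the input is treated as data, never as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (evil,)).fetchall()

print(len(leaked), len(safe))  # 1 0
```

Nothing new here for application security folks, but expose a pooled, SQL-speaking datastore to the world and every one of its clients has to get this right.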

The best I could find regarding security and Zoho came from their FAQ, which doesn't exactly inspire confidence given that they address logical/software security by suggesting that anti-virus software is the best line of defense for protecting your data, that "data encryption" will soon be offered as an "option," and that (implied) SSL will make you secure:

6. Is my data secured?

Many people ask us this question. And rightly so; Zoho has invested alot of time and money to ensure that your information is secure and private. We offer security on multiple levels including the physical, software and people/process levels; In fact your data is more secure than walking around with it on a laptop or even on your corporate desktops.

Physical: Zoho servers and infrastructure are located in the most secure types of data centers that have multiple levels of restrictions for access including: on-premise security guards, security cameras, biometric limited access systems, and no signage to indicate where the buildings are, bullet proof glass, earthquake ratings, etc.

Hardware: Zoho employs state of the art firewall protection on multiple levels eliminating the possibility of intrusion from outside attacks

Logical/software protection: Zoho deploys anti-virus software and scans all access 24 x7 for suspicious traffic and viruses or even inside attacks; All of this is managed and logged for auditing purposes.

Process: Very few Zoho staff have access to either the physical or logical levels of our infrastructure. Your data is therefore secure from inside access; Zoho performs regular vulnerability testing and is constantly enhancing its security at all levels. All data is backed up on multiple servers in multiple locations on a daily basis. This means that in the worst case, if one data center was compromised, your data could be restored from other locations with minimal disruption. We are also working on adding even more security; for example, you will soon be able to select a "data encryption" option to encrypt your data en route to our servers, so that in the unlikely event of your own computer getting hacked while using Zoho, your documents could be securely encrypted and inaccessible without a "certificate" which generally resides on the server away from your computer.

Fun times ahead, folks.

/Hoff

Security Will Not End Up In the Network…

June 3rd, 2008

It’s not the destination, it’s the journey, stupid.

You can’t go a day without reading from the peanut gallery that it is "…inevitable that network security will eventually be subsumed into the network fabric."  I’m not picking on Rothman specifically, but he’s been banging this drum loudly of late.

For such a far-reaching, profound and prophetic statement, claims like these are strangely myopic and inaccurate…and then they’re exactly right.

Confused?

Firstly, it’s sort of silly and obvious to trumpet that "network security" will end up in the "network."  Duh.  What’s really meant is that "information security" will end up in the network, but that’s sort of goofy, too. You’ll even hear that "host-based security" will end up in the network…so let’s just say that what’s being angled at here is that security will end up in the network.

These statements are often framed within a temporal bracket that simply ignores the bigger picture and reads like a eulogy.  The reality is that historically we have come to accept that security and technology are cyclic, and yet we continue to witness these terminal predictions defining an end state for security that has never arrived and never will.


Let me make plain my point: there is no final resting place for where and how security will "end up."

I’m visual, so let’s reference a very basic representation of my point.  This graph represents the cyclic transition over time of where and how we invest in security.

We ultimately transition between host-based security, information-centric security and network security over time.

We do this little shuffle based upon the effectiveness and maturity of technology; economics; cultural, societal and regulatory issues; and the effects of disruptive innovation.  In reality, this isn’t a smooth sine wave at all; it’s actually more a classic damped oscillation, à la the punctuated equilibrium theory I’ve spoken about before, but it’s easier to visualize this way.

[Graph: "You Are Here" on the cyclic security investment curve]
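For the mathematically inclined, the damped-oscillation shape is easy to reproduce; here is a quick sketch with arbitrary, purely illustrative constants (the period and damping numbers mean nothing beyond making the curve visible):

```python
import math

def focus(t, period=6.0, damping=12.0):
    """Where security 'lives' at time t: positive values lean network-centric,
    negative lean host/information-centric. A damped sine: the swings shrink
    over time but never stop."""
    return math.exp(-t / damping) * math.sin(2 * math.pi * t / period)

# Successive peaks shrink: the pendulum keeps swinging with less amplitude.
peaks = [abs(focus(t)) for t in (1.5, 7.5, 13.5)]  # one peak per cycle
print(all(a > b for a, b in zip(peaks, peaks[1:])))  # True
```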

Our investment strategy, and where security is seen as being "positioned," reverses direction over time and continues ad infinitum.  This has proven itself time and time again, yet we continue to be wowed by the prophetic utterances of people who on the one hand talk about these never-ending cycles and on the other pretend they don’t exist by claiming the "death" of one approach over another.
 

Why?

To answer that, let’s take a look at how the cyclic pendulum effect of our focus on security trends from the host to the information to the network and back again by analyzing the graph above.

  1. If we take a look at the arbitrary "starting" point indicated by the "You Are Here" dot on the sine wave above, I suggest that over the last 2-3 years or so we’ve actually headed away from the network as the source of all things security.

    There are lots of reasons for this: economic, ideological, technological, regulatory and cultural.  If you want to learn more about this, check out my posts on how disruptive innovation fuels strategic transience.

    In short, the network has not been able to (and never will) deliver the efficacy, capabilities or cost-effectiveness desired to secure us from evil, so instead we look at actually securing the information itself.  The security industry messaging of late is certainly bearing testimony to that fact.  Check out this year’s RSA conference…

  2. As we focus then on information centricity, we see the resurgence of ERM, governance and compliance come into focus.  As policies proliferate, we realize that this is really hard and that we don’t have effective and ubiquitous data classification, policy affinity or heterogeneous enforcement capabilities.  We shake our heads at the ineffectiveness of the technology we have and hear the cries of pundits everywhere that we need to focus on the things that really matter…

    In order to ensure that we effectively classify data at the point of creation, we recognize that we can’t do this automagically and we don’t have standardized schemas or metadata across structured and unstructured data, so we’ll look at each other, scratch our heads and conclude that the applications and operating systems need modification to force-fit policy, classification and enforcement.

    Rot roh.
     

  3. Now that we have the concept of policies and classification, we need the teeth to ensure it, so we start to overlay emerging technology solutions on the host in applications and via the OS’s that are unfortunately non-transparent and affect the users and their ability to get their work done.  This becomes labeled as a speed bump and we grapple with how to make this less impacting on the business since security has now slowed things down and we still have breaches because users have found creative ways of bypassing technology constraints in the name of agility and efficiency…
     
  4. At this point, the network catches up in its ability to process closer to "line speed," and some of the data classification functionality from the host commoditizes into the "network" — which by then is as much in the form of appliances as it is routers and switches — and always will be.  So as we round this upturn focusing again on being "information centric," with the help of technology, we seek to use our network investment to offset impact on our users.
     
  5. Ultimately, we get the latest round of "next generation" network solutions which promise to deliver us from our woes, but as we "pass go and collect $200" we realize we’re really at the same point we were at point #1.

‘Round and ’round we go.

So, there’s no end state.  It’s a continuum.  The budget and operational elements of who "owns" security and where it’s implemented simply follow the same curve.  Throw in disruptive innovation such as virtualization, and the entire concept of the "host" and the "network" morphs and we simply realize that it’s a shift in period on the same graph.

So all this pontification that it is "…inevitable that network security will eventually be subsumed into the network fabric" is only as accurate as what phase of the graph you reckon you’re on.  Depending upon how many periods you’ve experienced, it’s easy to see how some who have not seen these changes come and go could be fooled into not being able to see the forest for the trees.

Here’s the reality we actually already know and should not come to you as a surprise if you’ve been reading my blog: we will always need a blended investment in technology, people and process in order to manage our risk effectively.  From a technology perspective, some of this will take the form of controls embedded in the information itself, some will come from the OS and applications and some will come from the network.

Anyone who tells you differently has something to sell you or simply needs a towel for the back of his or her ears…

/Hoff

What a Shocker, Stiennon & I Disagree: Arbor + Ellacoya Make Total Sense…

January 25th, 2008

"Common sense has nothing to do with it. When I say he’s wrong, he’s wrong." — Ethel Mertz, I Love Lucy.

What a surprise: I disagree totally with Richard Stiennon on his assessment of the value proposition regarding the acquisition of Ellacoya by Arbor Networks.

Specifically, I find it hysterical that Richard claims that Arbor is "abandoning the security space."  Just the opposite, I believe Arbor — given what they do — is pursuing a course of action that will allow them to not only continue to cement their value proposition in the security space, but extend it further, both in the carrier and enterprise space.

I think that it comes down to what Richard defines as "security" — a term I obviously despise for reasons just like this.

Here’s where we diverge:

I was actually in Ann Arbor last week when news broke that Arbor Networks had acquired Ellacoya, a so-called "deep packet inspection" technology vendor. I was perplexed. That’s not security.

"That’s not security." Funny.  See below.

First let me clear up some terminology.  "Deep Packet Inspection" was the term some Gartner analyst popularized to describe what content filtering gateways do. They inspect content for worms, attacks, and viruses. Somewhere along the line the traffic shaping industry (Ellacoya, Allot, Sandvine) co-opted the term to describe what their devices do: look at the packet header to determine what protocol is being transported and throttle the throughput based on protocol. In other words, Quality of Service for network traffic. These devices do not look at payloads at all except in some rare instances when you have to determine if Skype-like programs are spoofing different protocols.

Firstly, Richard conveniently trivializes DPI.  DPI is certainly about inspecting the packet (beyond the header, by the way) and determining, with precision and fidelity, what protocol and application are being used.  In a carrier network, that’s used for provisioning, network allocation, bandwidth management and service level management.

These are terms every enterprise of worth is used to hearing and managing to!

Certainly one disposition, once the packets are profiled, could be to apply QoS (which is often what one might do in DoS/DDoS situations), but there are multiple benefits to being able to apply policies and enact dispositions dependent on the use or misuse of a specific application or protocol.
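To illustrate the distinction (a toy sketch, not any vendor’s classification engine; the signature table and function names are invented), compare a header-only port lookup against matching the first payload bytes; the Skype-style spoofing case mentioned above is exactly where the two disagree:

```python
# First payload bytes for a few well-known protocols (illustrative subset).
SIGNATURES = {
    b"GET ": "http",
    b"POST": "http",
    b"SSH-": "ssh",
    b"\x16\x03": "tls",  # TLS handshake record header
}

def classify(dst_port: int, payload: bytes) -> str:
    """Prefer payload evidence ('deep') over the port number ('shallow')."""
    shallow = {80: "http", 22: "ssh", 443: "tls"}.get(dst_port, "unknown")
    deep = next((proto for sig, proto in SIGNATURES.items()
                 if payload.startswith(sig)), "unknown")
    return deep if deep != "unknown" else shallow

# Spoofing case: traffic on port 80 that isn't HTTP at all.
print(classify(80, b"\x16\x03\x01\x00\x05"))  # tls, not http
```

A port-only classifier would have called that flow "http"; the payload says otherwise, and policy can then be applied to the actual application in use.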

In fact, if you don’t think that this is "security," why do we see QoS/rate limiting in almost every firewall platform today?  It may not show up in the GUI, but this is a fundamental way of dealing with attack.

Oh, and by the way, Richard, perhaps you ought to read your own product manuals, as Fortinet provides QoS as a "security" function…perhaps not as robustly as Ellacoya (and soon Arbor):

FortiGate Traffic Shaping Technical Note

The FortiGate Traffic Shaping Technical Note, available on the Technical Documentation Web Site, discusses Quality of Service (QoS) and traffic shaping, describes FortiGate traffic shaping using the token bucket filter mechanism, and provides general procedures and tips on how to configure traffic shaping on FortiGate firewalls.

FortiOS v3.0 MR1 introduced inbound traffic shaping per interface. For any FortiGate interface you can use the following command to configure inbound traffic shaping for that interface. Inbound traffic shaping limits the bandwidth accepted by the interface.


config system interface
   edit port2
      set inbandwidth 50
   end
end

This command limits the inbound traffic that the port2 interface accepts to 50 Kb/sec. You can set inbound traffic shaping for any FortiGate interface and for more than one FortiGate interface. Setting inbandwidth to 0 (the default) means unlimited bandwidth or no traffic shaping.

Inbound traffic shaping limits the amount of traffic accepted by the interface. This limiting occurs before the traffic is processed by the FortiGate unit. Limiting inbound traffic takes precedence over traffic shaping applied using firewall policies. This means that traffic shaping applied by firewall policies is applied to traffic already limited by inbound traffic shaping.
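For anyone unfamiliar, the "token bucket filter mechanism" the note mentions is a standard rate-limiting algorithm: tokens accumulate at a fixed rate up to a burst capacity, and traffic is forwarded only while tokens remain. A minimal sketch (my own illustration, not FortiOS internals):

```python
class TokenBucket:
    """Minimal token bucket: 'rate' tokens are added per second, up to
    'capacity'. A packet of n bytes is forwarded only if n tokens exist."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # refill rate (tokens/sec)
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = 0.0           # timestamp of the last check

    def allow(self, n: float, now: float) -> bool:
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# Shape to roughly 50,000 tokens/sec with bursts of up to 10,000
# (numbers chosen only to echo the inbandwidth example, not FortiOS units).
bucket = TokenBucket(rate=50_000, capacity=10_000)
print(bucket.allow(8_000, now=0.0))  # True: within the initial burst
print(bucket.allow(8_000, now=0.0))  # False: bucket nearly drained
print(bucket.allow(8_000, now=1.0))  # True: a second of refill elapsed
```

Whether a vendor calls this "QoS" or "security," the underlying mechanism is the same: a policy decision about which traffic gets through and how fast.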

Lots of uses of the word "firewall" in the context of "traffic shaping" in that description…Here’s a link to your knowledge base, just in case you don’t have it ;)

Secondly, since availability is often a function of security, as an administrator I’d want to be able to craft a "security policy" that ensures the stuff that matters most to me gets through and is prioritized as such, while the rest fights for scraps.  Doing this with precision and fidelity is incredibly important whether you’re a carrier or an enterprise.  Oh wait, here’s some more Fortinet documentation that seems to contradict the "that’s not security" sentiment:

Traffic shaping, which is applied to a Firewall Policy, is enforced for traffic which may flow in either direction. Therefore a session which may be set up by an internal host to an external one, via an Internal->External policy, will have traffic shaping applied even if the data stream is then coming from external to internal. For example, an FTP ‘get’, or an SMTP server connecting to an external one in order to retrieve email.

Remember CHKP’s FloodGate?  It never particularly worked out from an integration perspective, but it was a good idea nonetheless.  Cisco’s got it.  Juniper’s got it…

Further, there’s a new company — you may have heard of it — that takes application specificity and applies granular policies on traffic that does just this sort of thing: Palo Alto Networks.  They call it their "next generation firewall."  Love or hate the title (I don’t particularly care for it), I call it common sense.  Are you going to tell me this isn’t security, either?

The next, next-generation of security devices will extend these decision-making criteria from ports/protocols through application "conduits" and start making decisions on content in context.  This is the natural extension of DPI.

I won’t argue with the rest of Richard’s points about M&A risk and market expansion because he’s right in many of his examples, but that wasn’t the title of his post or the real sentiment.

I think that this deal enhances both the capabilities and applicability of Arbor’s solutions which have been largely stovepiped and pigeonholed in the DDoS category based upon what they do today.  I hope they can execute on the integration play.

As to the notion of ignoring the enterprise and "doubling down on the carrier market," Arbor has a great DDoS product for both markets; this allows them now to take advantage of the cresting consolidation activity in both and start diversifying their SECURITY offerings in a way that is intelligently roadmapped.

Who knows.  Perhaps they’ll re-market the combined products as a "multiservices security gateway" just like Fortinet does with their carrier products (here.)

I think your marketing slip is showing, Rich.

/Hoff


Grab the Popcorn: It’s the First 2008 “Ethical Security Marketing” (Oxymoron) Dust-Up…

January 5th, 2008 15 comments

Robert Hansen (RSnake / ha.ckers.org / SecTheory) created a little challenge (pun intended) a couple of days ago titled "The Diminutive XSS worm replication contest":

The diminutive XSS worm replication contest is a week long contest to get some good samples of the smallest amount of code necessary for XSS worm propagation. I’m not interested in payloads for this contest, but rather, the actual methods of propagation themselves. We’ve seen the live worm code and all of it is muddied by obfuscation, individual site issues, and the payload itself. I’d rather think cleanly about the most efficient method for propagation where every character matters.

Kurt Wismer (anti-virus rants blog) thinks this is a lousy idea:

yes, folks… robert hansen (aka rsnake), the founder and ceo of sectheory, felt it would be a good idea to hold a contest to see who could create the smallest xss worm… ok, so there’s no money changing hands this time, but that doesn’t mean the winner isn’t getting rewarded – there are absolutely rewards to be had for the winner of a contest like this and that’s a big problem because lots of people want rewards and this kind of contest will make people think about and create xss worms when they wouldn’t have before…

Here’s where Kurt diverges from simply highlighting the potential for misuse of the contest derivatives.  He suggests that RSnake is being unethical and is encouraging this contest not for academic purposes, but rather to reap personal gain from it:

would you trust your security to a person who makes or made malware? how about a person or company that intentionally motivates others to do so? why do you suppose the anti-virus industry works so hard to fight the conspiracy theories that suggest they are the cause of the viruses? at the very least mr. hansen is playing fast and loose with the public’s trust and ultimately harming security in the process, but there’s a more insidious angle too…

while the worms he’s soliciting from others are supposed to be merely proof of concept, the fact of the matter is that proof of concept worms can still cause problems (the recent orkut worm was a proof of concept)… moreover, although the winner of the contest doesn’t get any money, at the end of the day there will almost certainly be a windfall for mr. hansen – after all, what do you suppose happens when you’re one of the few experts on some relatively obscure type of threat and that threat is artificially made more popular? well, demand for your services goes up of course… this is precisely the type of shady marketing model i described before, where the people who stand to gain the most out of a problem becoming worse directly contribute to that problem becoming worse… it made greg hoglund and jamie butler household names in security circles, and it made john mcafee (pariah though he may be) a millionaire…

I think the following exchange in the comments section of the contest forum offers an interesting position from RSnake’s perspective:                   

Re: Diminutive XSS Worm Replication Contest
Posted by: Gareth Heyes | Date: January 04, 2008 04:56PM

@rsnake

This contest is just asking for trouble :)

Are there any legal issues for creating such a worm in the uk?

Re: Diminutive XSS Worm Replication Contest
Posted by: rsnake | Date: January 04, 2008 05:11PM

@Gareth Heyes – perhaps, but trouble is my middle name. So is danger. Actually I have like 40 middle names it turns out. ;) No, I’m not worried, this is academic – it won’t work anywhere without modification of variables, and has no payload. The goal is to understand worm propagation and get to the underlying important pieces of code.

I’m not in the UK and am not a lawyer so I can’t comment on the laws. I’m not suggesting anyone should try to weaponize the code (they could already do that with the existing worm code if they wanted anyway).

So, we’ve got Wismer’s perspective and (indirectly) RSnake’s. 

What’s yours?  Do you think holding a contest to build a POC for a worm is a good idea?  Do the benefits of research and understanding the potential attacks so one can defend against them outweigh the potential for malicious use?  Do you think there are, or will be, legal ramifications from these sorts of activities?

/Hoff

On-Demand SaaS Vendors Able to Secure Assets Better than Customers?

August 16th, 2007 4 comments

I’m a big advocate of software as a service (SaaS) — have been for years.  This evangelism started for me almost 5 years ago when I became a Qualys MSSP customer, listening to Philippe Courtot espouse the benefits of SaaS for vulnerability management.  This was an opportunity to manage my VA problem more efficiently, effectively and cheaply.  They demonstrated how they were good custodians of the data (my data) that they housed and how I could expect they would protect it.

I did not, however, feel *more* secure because they housed my VA data.  I felt secure enough in how they housed it that it should not fall into the wrong hands.  It’s called an assessment of risk and exposure.  I performed it and was satisfied it matched my company’s appetite and business requirements.

Not one to appear unclear on where I stand, I maintain that SaaS can bring utility, efficiency, cost effectiveness, enhanced capabilities and improved service levels to a corporation depending upon who, what, why, how, where and when the service is deployed.  Sometimes it can bring a higher level of security to an organization, but so can an armed squadron of pissed-off Oompa Loompas — it’s all a matter of perspective.

I suggest that attempting to qualify the benefits of SaaS by generalizing in any sense is, well, generally a risky thing to do.  It often turns what could be a valid point of interest into a point of contention.

Such is the case with a story I read in a UK edition of IT Week by Phil Muncaster titled "On Demand Security Issues Raised."  In this story, the author describes the methods by which the security posture of SaaS vendors may be measured, comparing the value, capabilities and capacity of the various options and the venue for evaluating a SaaS MSSP: hire an external contractor, or rely on the MSSP to furnish you the results of an internally generated assessment.

I think this is actually a very useful and valid discussion to have — whom to trust and why?  In many cases, these vendors house sensitive and sometimes confidential data regarding an enterprise, so security is paramount.  One would suggest that anyone looking to engage an MSSP of any sort, especially one offering a critical SaaS, would perform due diligence in one form or another before signing on the dotted line.

That’s not really what I wanted to discuss, however.

What I *did* want to address was the comment in the article coming from Andy Kellett, an analyst for Burton, that read thusly:

"Security is probably less a problem than in the end-user organisations because [on-demand app providers] are measured by the service they provide," Kellett argued.

I *think* I probably understand what he’s saying here…that security is "less of a problem" for an MSSP because the pressures of the implied penalties associated with violating an SLA are so much more motivating to get security "right" that they can do it far more effectively, efficiently and better than a customer.

This is a selling point, I suppose?  Do you, dear reader, agree?  Does the implication of outsourcing security actually mean that you "feel" or can prove that you’re more secure or better secured than you could do yourself by using a SaaS MSSP?

"I don’t agree the end-user organisation’s pen tester of choice should be doing the testing. The service provider should do it and make that information available."

Um, why?  I can understand not wanting hundreds of scans against my service in an unscheduled way, but what do you have to hide?  You want me to *trust* you that you’re more secure or holding up your end of the bargain?  Um, no thanks.  It’s clear that this person has never seen the results of an internally generated PenTest and how real threats can be rationalized away into nothingness…

Clarence So of Salesforce.com agreed, adding that most chief information officers today understand that software-as-a-service (SaaS) vendors are able to secure data more effectively than they can themselves.

Really!?  It’s not just that they gave into budget pressures, agreed to transfer the risk and reduce OpEx and CapEx?  Care to generalize more thoroughly, Clarence?  Can you reference proof points for me here?  My last company used Salesforce.com, but as the person who inherited the relationship, I can tell you that I didn’t feel at all more "secure" because SF was hosting my data.  In fact, I felt more exposed.

"I’m sure training companies have their own motives for advocating the need for in-house skills such as penetration testing," he argued. "But any suggestions the SaaS model is less secure than client-server software are well wide of the mark."

…and any suggestion that they are *more* secure is pure horsecock marketing at its finest.  Prove it.  And please don’t send me your SAS-70 report as your example of security fu.

So just to be clear, I believe in SaaS.  I encourage its use if it makes good business sense.  I don’t, however, agree that you will automagically be *more* secure.  You may be just *as* secure, but it should be more cost-effective to deploy and manage.  There may very well be cases (I can even think of some) where one could be more or less secure, but I’m not into generalizations.

Whaddya think?

/Hoff

Secure Services in the Cloud (SSaaS/Web2.0) – InternetOS Service Layers

July 13th, 2007 2 comments

The last few days of activity involving Google and Microsoft have really catalyzed some thinking and demonstrated some very intriguing indicators as to how the delivery of applications and services is dramatically evolving. 

I don’t mean the warm and fuzzy marketing fluff.  I mean some real anchor technology investments by the big-boys putting their respective stakes in the ground as they invest hugely in redefining their business models to setup for the future.

Enterprises large and small are really starting to pay attention to the difference between infrastructure and architecture and this has a dramatic effect on the service providers and supply chain who interact with them.

It’s become quite obvious that there is huge business value associated with divorcing the need for "IT" to focus on physically instantiating and locating "applications" on "boxes" and instead  delivering "services" with the Internet/network as the virtualized delivery mechanism.

Google v. Microsoft – Let’s Get Ready to Rumble!

My last few posts on Google’s move to securely deliver a variety of applications and services represent the uplift of the "traditional" perspective of backoffice SaaS offerings such as Salesforce.com, but they also highlight the migration of desktop applications and utility services to the "cloud."

This really executes on the thin-client, Internet-centric vision from back in the day o’ the bubble, when we saw a ton of Internet-borne services such as storage, backup, etc. using the "InternetOS" as the canvas for service.

So we’ve talked about Google.  I maintain that their strategy is to ultimately take on Microsoft — including backoffice, utility and desktop applications.  So let’s look @ what the kids from Redmond are up to.

What Microsoft is developing towards with their vision of CloudOS was just recently expounded upon by one Mr. Ballmer.

Not wanting to lose mindshare or share of wallet, Microsoft is maneuvering to give the customer control over how they want to use applications and more importantly how they might be delivered.  Microsoft Live bridges the gap between the traditional desktop and puts that capability into the "cloud."

Let’s explore that a little:

In addition to making available its existing services, such as mail and instant messaging, Microsoft also will create core infrastructure services, such as storage and alerts, that developers can build on top of. It’s a set of capabilities that have been referred to as a "Cloud OS," though it’s not a term Microsoft likes to use publicly.

Late last month, Microsoft introduced two new Windows Live Services, one for sharing photos and the other for all types of files. While those services are being offered directly by Microsoft today, they represent the kinds of things that Microsoft is now promising will also be made available to developers.

Among the other application and infrastructure components Microsoft plans to open are its systems for alerts, contact management, communications (mail and messenger) and authentication.

As it works to build out the underlying core services, Microsoft is also offering up applications to partners, such as Windows Live Hotmail, Windows Live Messenger and the Spaces blogging tool.

Combine the advent of "thinner" end-points (read: mobility products) with high-speed, lower-latency connectivity and we can see why this model is attractive and viable.  I think this battle is heating up and the consumer will benefit.

A Practical Example of SaaS/InternetOS Today?

So if we take a step back from Google and Microsoft for a minute, let’s take a snapshot of how one might compose, provision, and deploy applications and data as a service using a similar model over the Internet with tools other than Live or Google Gears.

Let me give you a real-world example — deliverable today — of this capability with a functional articulation of this strategy; on-demand services and applications provided via virtualized datacenter delivery architectures using the Internet as the transport.  I’m going to use a mashup of two technologies: Yahoo Pipes and 3tera’s AppLogic.

Yahoo Pipes is "…an interactive data aggregator and manipulator that lets you mashup your favorite online data sources."  Assuming you have data from various sources you want to present, an application environment such as Pipes will allow you to dynamically access, transform and present this information any way you see fit.

This means that you can create what amounts to applications and services on demand.
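As a rough sketch of what that access/transform/present pipeline amounts to in code (the feeds, titles and fields here are entirely hypothetical, stand-ins for whatever online sources you'd wire into Pipes):

```python
# Toy Pipes-style mashup: merge two hypothetical in-memory feeds,
# filter by keyword, sort newest-first, and present the result.
feed_a = [
    {"title": "Virtualization security", "date": "2007-07-10"},
    {"title": "Cooking with tofu", "date": "2007-07-11"},
]
feed_b = [
    {"title": "Grid computing security", "date": "2007-07-09"},
]

def mashup(*feeds, keyword):
    # Aggregate: flatten all feeds into one list of items.
    merged = [item for feed in feeds for item in feed]
    # Manipulate: keep only items whose title mentions the keyword.
    matched = [i for i in merged if keyword.lower() in i["title"].lower()]
    # Present: newest first (ISO dates sort correctly as strings).
    return sorted(matched, key=lambda i: i["date"], reverse=True)

for item in mashup(feed_a, feed_b, keyword="security"):
    print(item["date"], item["title"])
```

The point is that the "application" is just a composition of data sources and transforms, defined on demand rather than installed anywhere.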

Let’s agree however that while you have the data integration/presentation layer, in many cases you would traditionally require a complex collection of infrastructure from which this source data is housed, accessed, maintained and secured. 

However, rather than worry about where and how the infrastructure is physically located, let’s use the notion of utility/grid computing to dynamically make available an on-demand architecture that is modular, reusable and flexible enough to make my service delivery a reality — using the Internet as a transport.

Enter 3Tera’s AppLogic:

3Tera’s AppLogic is used by hosting providers to offer true utility computing. You get all the control of having your own virtual datacenter, but without the need to operate a single server.

- Deploy and operate applications in your own virtual private datacenter
- Set up infrastructure, deploy apps and manage operations with just a browser
- Scale from a fraction of a server to hundreds of servers in days
- Deploy and run any Linux software without modifications
- Get your life back: no more late night rushes to replace failed equipment

In fact, BT is using them as part of the 21CN project which I’ve written about many times before.

So check out this vision, assuming the InternetOS as a transport.  It’s the drag-and-drop, point-and-click Metaverse of virtualized application and data combined with on-demand infrastructure.

You first define the logical service composition and provisioning through 3Tera with a visual drag-drop canvas, defining firewalls, load-balancers, switches, web servers, app. servers, databases, etc.  Then you click the "Go" button.  AppLogic provisions the entire thing for you without you even necessarily knowing where these assets are.

Then, use something like Pipes to articulate how data sources can be accessed, consumed and transformed to deliver the requisite results.  All over the Internet, transparently and securely.

Very cool stuff.

Here are some screen-caps of Pipes and 3Tera.


Take5- Five Questions for Chris Wysopal, CTO Veracode

June 19th, 2007 No comments

In this first installment of Take5, I interview Chris Wysopal, the CTO of Veracode about his new company, secure coding, vulnerability research and the recent forays into application security by IBM and HP.

This entire interview was actually piped over a point-to-point TCP/IP connection using command-line redirection through netcat.  No packets were harmed during the making of this interview…

First, a little background on the victim, Chris Wysopal:

Chris Wysopal is co-founder and CTO of Veracode. He has testified on Capitol Hill on the subjects of government computer security and how vulnerabilities are discovered in software. Chris co-authored the password auditing tool L0phtCrack, wrote the Windows version of netcat, and was a researcher at the security think tank L0pht Heavy Industries, which was acquired by @stake. He was VP of R&D at @stake and later director of development at Symantec, where he led a team developing binary static analysis technology.

He was influential in the creation of responsible vulnerability disclosure guidelines and a founder of the Organization for Internet Safety. Chris wrote "The Art of Software Security Testing: Identifying Security Flaws", published by Addison Wesley and Symantec Press in December 2006. He earned his Bachelor of Science degree in Computer and Systems Engineering from Rensselaer Polytechnic Institute.

1) You’re a founder of Veracode, which is described as the industry’s first provider of automated, on-demand application security solutions.  What sort of application security services does Veracode provide?  Binary analysis, Web Apps?

Veracode currently offers binary static analysis of C/C++ applications for Windows and Solaris and for Java applications.  This allows us to find the classes of vulnerabilities that source code analysis tools can find, but on the entire codebase, including the libraries which you probably don’t have source code for.  Our product roadmap includes support for C/C++ on Linux and C# on .NET.  We will also be adding additional analysis techniques to our flagship binary static analysis.
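To give a (vastly simplified, purely illustrative) flavor of what working without source code can mean, here's a toy scan of a binary's raw bytes for references to historically unsafe libc routines. Real binary static analysis of the kind described above models control and data flow; this sketch of mine only pattern-matches symbol names:

```python
# Toy illustration of one signal binary analysis can surface without any
# source code: references to historically unsafe C library routines.
# This is NOT how production tools work -- they disassemble and model
# data flow -- but it shows analysis operating on the compiled artifact.
UNSAFE = [b"strcpy", b"strcat", b"sprintf", b"gets"]

def flag_unsafe_symbols(binary: bytes) -> list[str]:
    """Return the sorted set of unsafe routine names found in the bytes."""
    return sorted({name.decode() for name in UNSAFE if name in binary})

# A fake "binary" containing a symbol-table-like fragment:
blob = b"\x7fELF...\x00gets\x00printf\x00strcpy\x00"
print(flag_unsafe_symbols(blob))  # ['gets', 'strcpy']
```

Even this crude pass works on third-party libraries you have no source for, which is the crux of the binary-analysis pitch.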
 
2) Is this a SaaS model?  How do you charge for your services?  Do you see manufacturers using your services, or enterprises?

Yes.  Customers upload their binaries to us and we deliver an analysis of their security flaws via our web portal.  We charge by the megabyte of code.  We have both software vendors and enterprises who write or outsource their own custom software using our services.  We also have enterprises who are purchasing software ask the software vendors to submit their binaries to us for a 3rd party analysis.  They use this analysis as a factor in their purchasing decision. It can lead to a "go/no go" decision, a promise by the vendor to remediate the issues found, or a reduction in price to compensate for the cost of additional controls or the cost of incident response that insecure software necessitates.
 
3) I was a Qualys customer — a VA/VM SaaS company.  Qualys had to spend quite a bit of time convincing customers that allowing for the storage of their VA data was secure.  How does Veracode address a customer’s security concerns when uploading their applications?

We are absolutely fanatical about the security of our customers’ data.  I look back at the days when I was a security consultant, where we had vulnerability data on laptops and corporate file shares, and I say, "what were we thinking?"  All customer data at Veracode is encrypted in storage and at rest with a unique key per application and customer.  Everyone at Veracode uses 2 factor authentication to log in, and 2 factor is the default for customers.  Our data center is a SAS 70 Type II facility. All data access is logged so we know exactly who looked at what and when. As security people we are professionally paranoid, and I think it shows through in the system we built.  We also believe in 3rd party verification, so we have had a top security boutique do a security review of our portal application.
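The "unique key per application and customer" pattern can be sketched with standard key derivation. To be clear, this is my own illustrative assumption of the general approach (deriving independent data keys from a master secret), not Veracode's actual design; the IDs and master key below are placeholders:

```python
import hashlib
import hmac

def derive_key(master_key: bytes, customer_id: str, app_id: str) -> bytes:
    """Derive a distinct data-encryption key per (customer, application)
    from a single master key, HKDF-style: compromising one derived key
    exposes only that one customer's application data."""
    info = f"{customer_id}:{app_id}".encode()
    return hmac.new(master_key, info, hashlib.sha256).digest()

master = b"\x00" * 32  # placeholder; a real master key lives in an HSM/KMS
k1 = derive_key(master, "acme", "billing-app")
k2 = derive_key(master, "acme", "hr-app")
assert k1 != k2  # different applications get independent keys
```

The derived keys would then feed an authenticated cipher; the sketch only shows why per-customer, per-application keys compartmentalize a breach.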
 
4) With IBM’s acquisition of Watchfire and today’s announcement that HP will buy SPI Dynamics, how does Veracode stand to play in this market of giants who will be competing to drive service revenues?

We have designed our solution from the ground up to have the Web 2.0 ease of use and experience, and we have the quality of analysis that I feel is the best in the market today.  An advantage is that Veracode is an independent assessment company that customers can trust not to play favorites to other software companies because of partnerships or alliances. Would Moody’s or Consumer Reports be trusted as a 3rd party if they were part of a big financial or technology conglomerate? We feel a 3rd party assessment is important in the security world.
 
5) Do you see the latest developments in vulnerability research, with the drive for pay-for-zeroday initiatives, pressuring developers to produce secure code out of the box for fear of exploit, or is it driving the activity to companies like yours?

I think the real driver for developers to produce secure code, and for developers and customers to seek code assessments, is the reality that the cost of insecure code goes up every day and it’s adding to the operational risk of companies that use software.  People exploiting vulnerabilities are not going away and there is no way to police the internet of vulnerability information.  The only solution is for customers to demand more secure code, and proof of it, and for developers to deliver more secure code in response.