Archive

Archive for the ‘Web Application Security’ Category

Why Amazon Web Services (AWS) Is the Best Thing To Happen To Security & Why I Desperately Want It To Succeed

November 29th, 2012 15 comments

Many people who may only casually read my blog or peer at the timeline of my tweets may come away with the opinion that I suffer from confirmation bias when I speak about security and Cloud.

That is, many conclude that I am pro Private Cloud and against Public Cloud.

I find this deliciously ironic and wildly inaccurate. However, I must also take responsibility for this, as anytime one threads the needle and attempts to present a view from both sides with regard to incendiary topics without planting a polarizing stake in the ground, it gets confusing.

Let me clear some things up.

Digging deeper into what I believe, one would actually find that my blog, tweets, presentations, talks and keynotes highlight deficiencies in current security practices and solutions on the part of providers, practitioners and users in both Public AND Private Cloud and, in my own estimation, deliver an operationally-centric perspective that is reasonably critical yet sensitive to emergent paths as well as the well-trodden path behind us.

I’m not a developer.  I dabble in little bits of code (interpreted and compiled) for humor and to try and remain relevant.  Nor am I an application security expert, for the same reason.  However, I spend a lot of time around developers of all sorts, including those who write code for machines whose end goal isn’t to deliver applications directly, but rather to help deliver them securely.  Which may seem odd as you read on…

The name of this blog, Rational Survivability, highlights my belief that the last two decades of security architecture and practices — while useful in foundation — require a rather aggressive tune-up of priorities.

Our trust models, architecture, and operational silos have not kept pace with the velocity of the environments they were initially designed to support and unfortunately as defenders, we’ve been outpaced by both developers and attackers.

Since we’ve come to the conclusion that there’s no such thing as perfect security, “survivability” is a better goal.  Survivability leverages “security” and is ultimately a subset of resilience but is defined as the “…capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents.”  You might be interested in this little ditty from back in 2007 on the topic.

Sharp readers will immediately recognize the parallels between this definition of “survivability,” how security applies within context, and how phrases like “design for failure” align.  In fact, this is one of the calling cards of a company that has become synonymous with (IaaS) Public Cloud: Amazon Web Services (AWS.)  I’ll use them as an example going forward.

So here’s a line in the sand that I think will be polarizing enough:

I really hope that AWS continues to gain traction with the Enterprise.  I hope that AWS continues to disrupt the network and security ecosystem.  I hope that AWS continues to pressure the status quo and I hope that they do it quickly.

Why?

Almost a decade ago, the  Open Group’s Jericho Forum published their Commandments.  Designed to promote a change in thinking and operational constructs with respect to security, what they presciently released upon the world describes a point at which one might imagine taking one’s most important assets and connecting them directly to the Internet and the shifts required to understand what that would mean to “security”:

  1. The scope and level of protection should be specific and appropriate to the asset at risk.
  2. Security mechanisms must be pervasive, simple, scalable, and easy to manage.
  3. Assume context at your peril.
  4. Devices and applications must communicate using open, secure protocols.
  5. All devices must be capable of maintaining their security policy on an un-trusted network.
  6. All people, processes, and technology must have declared and transparent levels of trust for any transaction to take place.
  7. Mutual trust assurance levels must be determinable.
  8. Authentication, authorization, and accountability must interoperate/exchange outside of your locus/area of control
  9. Access to data should be controlled by security attributes of the data itself
  10. Data privacy (and security of any asset of sufficiently high value) requires a segregation of duties/privileges
  11. By default, data must be appropriately secured when stored, in transit, and in use.

These seem harmless enough today, but were quite unsettling when paired with the notion of “de-perimeterization,” which was often misconstrued to mean the immediate disposal of firewalls.  Many security professionals appreciated the commandments for what they expressed, but the design patterns, availability of solutions and belief systems of traditionalists constrained traction.

Interestingly enough, now that the technology, platforms, and utility services have evolved to enable these sorts of capabilities, and in fact have stressed our approaches to date, these exact tenets are what Public Cloud forces us to come to terms with.

If one looks at what public cloud services like AWS mean when aligned to traditional “enterprise” security architecture, operations and solutions, and maps that against the Jericho Forum’s Commandments, it enables a rather perfect rethink.

Instead of being focused on implementing “security” to protect applications and information based at the network layer — which is more often than not blind to both, contextually and semantically — public cloud computing forces us to shift our security models back to protecting the things that matter most: the information and the conduits that traffic in them (applications.)

As networks become more abstracted, so do existing security models.  This means that we must think about security programmatically, embedded as a functional delivery requirement of the application.
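
To make “security as code” a little more concrete, here is a minimal sketch (illustrative only, not a reference design) of a network policy expressed programmatically with AWS’s Python SDK (boto3) and shipped as part of the application’s own deployment artifacts. The VPC ID and CIDR ranges are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder VPC; in practice this would come from your provisioning pipeline.
VPC_ID = "vpc-00000000"

# The web tier's network policy lives with the application, not on a box in a rack.
sg = ec2.create_security_group(
    GroupName="web-tier",
    Description="Only HTTPS from the load balancer subnet",
    VpcId=VPC_ID,
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.1.0/24", "Description": "load balancer subnet"}],
    }],
)
```

The point isn’t the particular API; it’s that the policy is versioned, reviewed and deployed exactly like the application it protects.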

“Security” in complex, distributed and networked systems is NOT a tidy simple atomic service.  It is, unfortunately, represented as such because we choose to use a single noun to represent an aggregate of many sub-services, shotgunned across many layers, each with its own context, metadata, protocols and consumption models.

As the use cases for public cloud obscure and abstract these layers — flatten them — we’re left with the core of what we should focus on:

Build secure, reliable, resilient, and survivable systems of applications, comprised of secure services, atop platforms that are themselves engineered to do the same, in a way in which the information that transits them inherits these qualities.

So if Public Cloud forces one to think this way, how does one relate this to practices of today?

Frankly, enterprise (network) security design patterns are a crutch.  The screened-subnet DMZ pattern with perimeters is outmoded.  As Gunnar Peterson eloquently described, our best attempts at “security” over time are always some variation of firewalls and SSL.  This is the sux0r.  Importantly, this is not stated to blame anyone or suggest that a bad job is being done, but rather that a better one can be.

It’s not like we don’t know *what* the problems are; we just don’t invest in solving them as long-term projects.  Instead, we deploy compensating controls that defer what is now becoming inevitable: the compromise of applications that are poorly engineered and defended by systems that have no knowledge or context of the things they are defending.

We all know this, and yet looking at most private cloud platforms and implementations, we gravitate toward replicating these traditional design patterns logically after we’ve gone to so much trouble to articulate our way around them.  Public clouds make us approach what, where and how we apply “security” differently because we don’t have these crutches.

Either we learn to walk without them or simply not move forward.

Now, let me be clear.  I’m not suggesting that we don’t need security controls, but I do mean that we need a different and better application of them at a different level, protecting things that aren’t tied to physical topology or addressing schemes…or operating systems (inclusive of things like hypervisors, also.)

I think we’re getting closer.  Beyond infrastructure as a service, platform as a service gets us even closer.

Interestingly, at the same time we see the evolution of computing with Public Cloud, networking is also undergoing a renaissance, and as this occurs, security is coming along for the ride.  Because it has to.

As I was writing this blog (ironically in the parking lot of VMware, awaiting the start of a meeting to discuss abstraction, networking and security), James Staten (Forrester) tweeted something from @Werner Vogels’ keynote at AWS re:Invent:

I couldn’t have said it better myself 🙂

So while I may have been, and will continue to be, a thorn in the side of platform providers to improve their “survivability” capabilities to help us get from here to there, I reiterate the title of this scribbling: Amazon Web Services (AWS) Is the Best Thing To Happen To Security & I Desperately Want It To Succeed.

I trust that’s clear?

/Hoff

P.S. There’s so much more I could/should write, but I’m late for the meeting 🙂


Should/Can/Will Virtual Firewalls Replace Physical Firewalls?

October 15th, 2012 6 comments
Image: Diagram of a firewall between a LAN and a WAN (Photo credit: Wikipedia)

“Should/Can/Will Virtual Firewalls Replace Physical Firewalls?”

The answer is, as always, “Of course, but not really, unless maybe, you need them to…” 🙂

This discussion crops up from time-to-time, usually fueled by a series of factors which often lack the context to appropriately address it.

The reality is there exists the ever-useful answer of “it depends,” and frankly it’s a reasonable answer.

Back in 2008 when I created “The Four Horsemen of the Virtualization Security Apocalypse” presentation, I highlighted the very real things we needed to be aware of as we saw the rapid adoption of server virtualization…and the recommendations from virtualization providers as to the approach we should take in terms of securing the platforms and workloads atop them.  Not much has changed in almost five years.

However, each time I’m asked this question, I inevitably sound evasive as I ask for more detail about what the person doing the asking actually means by “physical” firewalls.  Normally the words “air-gap” are added to the mix.

The very interesting thing about how people answer this question is that, in reality, the great majority of firewalls deployed today have the following features in common:

  1. Heavy use of network “LAG” (link aggregation group) interface bundling/VLAN trunking and tagging
  2. Heavy network virtualization used, leveraging VLANs as security boundaries, trunked across said interfaces
  3. Increased use of virtualized contexts and isolated resource “virtual systems” and separate policies
  4. Heavy use of ASIC/FPGA and x86 architectures which make use of shared state tables, memory and physical hardware synced across fabrics and cluster members
  5. Predominant use of “stateful inspection” at layers 2-4 with the addition of protocol decoders at L5-7 for more “application-centric” enforcement
  6. Increasing use of “transparent proxies” at L2 but less (if any) full circuit or application proxies in the classic sense

So before I even START to address the use cases of the “virtual firewalls” that people reference as the comparison, nine times out of ten that supposed “air gap” provided by dedicated physical firewalls doesn’t compute.

Most of the firewall implementations that people have meet most of the criteria mentioned in items 1-6 above.

Further, most firewall architectures today aren’t running full L7 proxies across dedicated physical interfaces like in the good old days (Raptor, etc.) for some valid reasons…(read the postscript for an interesting prediction.)

Failure domains, and the threat modeling that illustrates cascading impact due to complexity, outright failure or compromised controls, are usually what people are interested in when asking this question, but this gets almost completely obscured by the “physical vs. virtual” concern and we often never dig deeper.

There are some amazing things that can be done in virtual constructs that we can’t do in the physical and there are some pretty important things that physical firewalls can provide that virtual versions have trouble with.  It’s all a matter of balance, perspective, need, risk and reward…oh, and operational simplicity.

I think it’s important to understand what we’re comparing when asking that question before we conflate use cases, mismatch expectations, and make summary generalizations (like I just did 🙂) about that which we are contrasting.

I’ll actually paint these use cases in a follow-on post shortly.

/Hoff

POSTSCRIPT:

I foresee that we will see a return of the TRUE application-level proxy firewall — especially with application identification, cheap hardware, more security and networking virtualized in hardware.  I see this being deployed both on-premise and as part of a security as a service offering (they are already, today — see CloudFlare, for example.)

If you look at the need to terminate SSL/TLS and provide not only L4-L7 sanity but also protection of applications/sessions at L5-7 (web and otherwise), AND the renewed dependence upon XML, SOAP, REST, JSON, etc., it will drive even more interesting discussions in this space.  Watch as the hybrid merge of the WAF+XML security services gateway returns to vogue… (see also Cisco EOLing ACE while I simultaneously receive an email from Intel informing me I can upgrade to their Intel Expressway Service Gateway…which I believe (?) was from the Sarvega acquisition?)
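
For the curious, here is a bare-bones sketch of the “terminate TLS so you can see and police the application layer” idea in Python. It’s a toy, not a WAF; the backend address, listening port and certificate files are placeholders:

```python
import socket
import ssl
import threading

BACKEND = ("127.0.0.1", 8080)   # placeholder plaintext application server

def pump(src, dst):
    # Copy bytes one way until either side closes; in a real proxy this is
    # where the L5-7 inspection (WAF rules, XML/JSON sanity checks) would sit.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        for s in (src, dst):
            try:
                s.close()
            except OSError:
                pass

def handle(client_tls):
    upstream = socket.create_connection(BACKEND)
    threading.Thread(target=pump, args=(upstream, client_tls), daemon=True).start()
    pump(client_tls, upstream)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")   # placeholder cert/key files

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8443))
listener.listen(64)

while True:
    raw, _ = listener.accept()
    try:
        tls = ctx.wrap_socket(raw, server_side=True)   # TLS terminates here, at the proxy
    except ssl.SSLError:
        raw.close()
        continue
    threading.Thread(target=handle, args=(tls,), daemon=True).start()
```

Everything interesting in a real product happens inside pump(): parsing, policy and the application-layer protections described above.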


AMI Secure? (Or: Shared AMIs/Virtual Appliances – Bot or Not?)

October 8th, 2009 6 comments

To some of you, this is going to sound like obvious and remedial advice that you would consider common sense.  This post is not for you.

Some of you — and you know who you are — are going to walk away from this post with a scratching sound coming from inside your skull.

The convenience of pre-built virtual appliances offered up for use in virtualized environments such as VMware’s Virtual Appliance marketplace or shared/community AMIs on AWS EC2 makes for a tempting reduction of time spent getting your virtualized/cloud environments up to speed; the images are there just waiting for a quick download and a point-and-click activation.  These juicy marketplaces will continue to sprout up with offerings of bundled virtual machines for every conceivable need: LAMP stacks, databases, web servers, firewalls…you name it.  Some are free, some cost money.

There’s a dark side to this convenience.  You have no idea as to the trustworthiness of the underlying operating systems or applications contained within these tidy bundles of cloudy joy.  The same could be said for much of the software in use today, but cloud simply exacerbates this situation by adding abstraction, scale and the elastic version of the snuggie that convinces people nothing goes wrong in the cloud…until it does.

While trust in mankind is noble, trust in software is a palm-head-slapper.  Amazon even tells you so:

AMIs are launched at the user’s own risk. Amazon cannot vouch for the integrity or security of AMIs shared by other users. Therefore, you should treat shared AMIs as you would any foreign code that you might consider deploying in your own data center and perform the appropriate due diligence.

Ideally, you should get the AMI ID from a trusted source (a web site, another user, etc). If you do not know the source of an AMI, we recommended that you search the forums for comments on the AMI before launching it. Conversely, if you have questions or observations about a shared AMI, feel free to use the AWS forums to ask or comment.

Remember that in IaaS-based service offerings, YOU are responsible for the security of your instances.  Do you really know where an AMI/VM/VA came from, what’s running on it and why?  Do you have the skills to be able to answer this question?  How would you detect if something was wrong? Are you using hardening tools?  Logging tools?  Does any of this matter if the “box” is rooted anyway?
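
As a trivial example of that due diligence, here is a sketch (using boto3; the AMI ID and the allow-list are placeholders) that at least asks who published an image before you launch it:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

CANDIDATE_AMI = "ami-0123456789abcdef0"          # placeholder image ID
TRUSTED_ACCOUNTS = {"123456789012"}              # placeholder: account IDs you have vetted

image = ec2.describe_images(ImageIds=[CANDIDATE_AMI])["Images"][0]

print("Owner account:", image["OwnerId"])
print("Owner alias:  ", image.get("ImageOwnerAlias", "(none)"))   # e.g. 'amazon' or 'aws-marketplace'
print("Name:         ", image.get("Name"))
print("Public image: ", image.get("Public"))

is_amazon_owned = image.get("ImageOwnerAlias") == "amazon"
if not (is_amazon_owned or image["OwnerId"] in TRUSTED_ACCOUNTS):
    raise SystemExit("This AMI is not from a vetted publisher; do your homework before launching it.")
```

It won’t tell you whether the image is rooted, but it’s the bare minimum before you point auto-scaling at something a stranger built.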

As I talk about in my Frogs and Cloudifornication presentations — and as the guys from Sensepost have shown — there’s very little to stop someone from introducing a trojaned/rootkitted AMI or virtual appliance that gets utilized by potentially thousands of people.  Instead of having to compromise clients on the Internet, why not just pwn system images that have the use of elastic cloud resources?

Imagine someone using auto-scaling with a common image to spool up hundreds (or more?) of instances — infected instances.  Two words: instant botnet.

There’s no outbound filtering (via security groups) in AWS, so exfiltrating your data would be easy. Registering C&C botnet channels would be trivial, especially over common ports.  Oh, and don’t forget that in most IaaS offerings, resource consumption is charged incrementally, so the “owner” gets to pay doubly for the fun — CPU, storage and network traffic could be driven sky high.  Another form of EDoS (economic denial of sustainability.)
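
For reference, where egress rules are available (VPC security groups, rather than the classic EC2 security groups of the day), locking outbound traffic down looks something like this minimal boto3 sketch; the group ID and destination CIDR are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
SG_ID = "sg-0123456789abcdef0"   # placeholder VPC security group

# Drop the default allow-all outbound rule...
ec2.revoke_security_group_egress(
    GroupId=SG_ID,
    IpPermissions=[{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# ...and permit only what the workload legitimately needs, e.g. HTTPS to an update mirror.
ec2.authorize_security_group_egress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "198.51.100.0/24", "Description": "update mirror"}],
    }],
)
```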

Given the fact that we’ve seen even basic DDoS attacks go undetected by these large providers despite their claims, the potential is frightening.

As the AWS admonishment above suggests, apply the same (more, actually) common sense regarding using these shared AMIs and virtual machines as you would were you to download and execute applications on your workstation or visit a website, or…oh, man…this is just a losing proposition. ;(

If you can, please build your own AMIs or virtual machines, or consider trusted sources that can be vetted and whose provenance and relative integrity can be derived.  Please don’t use shared images if you can avoid it.  Please ensure that you know what you’re getting yourself into.

Play safe.

/Hoff

* P.S. William Vambenepe (@vambenepe) reminded me of the other half of this problem when he said (on Twitter) "…it’s not just using someone’s AMI that’s risky. Sharing your AMI can be too http://bit.ly/1qMxgN " < A great post on what happens when people build AMIs/VMs/VAs with, um, unintended residue left over.  Check it out.

DDoS – A Moose On Cloud’s Table Or A Pea Under The Mattress?

September 7th, 2009 7 comments

Readers of my blog will no doubt be familiar with Roland Dobbins.  He’s commented on lots of posts here and whilst we don’t always see eye-to-eye, I really respect both his intellect and his style.

So it’s fair to say that Roland is not a shy lad.  Formerly at Cisco and now at Arbor, he’s made his position (and likely his living) on dealing with a rather unpleasant issue in the highly distributed and networked InterTubes: Distributed Denial of Service (DDoS) attacks.

A recent article in ITWire titled “DDoS, the biggest threat to Cloud Computing” sums up Roland’s focus:

“According to Roland Dobbins, solutions architect for network security specialist Arbor Networks, distributed denial of service attacks are one of the most under-rated and ill-guarded against security threats to corporate IT, and in particular the biggest threat facing cloud computing.”

DDOS, Dobbins claims, is largely ignored in many discussions around network and cloud computing security. “Most discussions around cloud security are centred around privacy, confidentiality, the separation of data from the application logic, but the security elephant in the room that very few people seem to want to talk about is DDOS. This is the number one security threat facing the cloud model,” he told last week’s Ausnog conference in Sydney.

“In cloud computing where infrastructure is shared by potentially millions of users, DDOS attacks have the potential to have much greater impact than against single tenanted architectures,” Dobbins argues. Yet, he says, “The cloud providers emerging as leaders don’t tend to talk much about their resiliency to DDOS attacks.”

Depending upon where you stand, especially if we’re talking about Public Clouds — and large Public Cloud providers such as Google, Amazon, Microsoft, etc. — you might cock your head to one side, raise an eyebrow and focus on the sentence fragment “…and in particular the biggest threat facing cloud computing.”  One of the reasons DDoS is under-appreciated is that, in relative frequency — and in the stable of solutions and skill sets to deal with it — DDoS is a long-tail event.

With unplanned outages afflicting almost all major Cloud providers today, the moose on the table seems to be good ol’ internal operational issues at the moment…that’s not to say it won’t become a bigger problem as the models for networked Cloud resources change, but as the model changes, so will the defensive options in the stable.

With the decentralization of data but the mass centralization of data centers featured by these large Cloud providers, one might see how this statement could strike fear into the hearts of potential Cloud consumers everywhere and Roland is doing his best to serve us a warning — a Public (denial of) service announcement.

Sadly, at this point, however, I’m not convinced that DDoS is “the biggest threat facing Cloud Computing” and whilst providers may not “…talk much about their resiliency to DDoS attacks,” some of that may likely be due to the fact that they don’t talk much about security at all.  It also may be due to the fact that in many cases, what we can do to respond to these attacks is directly proportional to the size of your wallet.

Large network and service providers have been grappling with DDoS for years, so have large enterprises.  Folks like Roland have been on the front lines.

Cloud will certainly amplify the issues of DDoS because of how resources — even when distributed and resiliently load balanced in elastic and “perceptively infinitely scalable” ways — are ultimately organized, offered and consumed.  This is a valid point.

But if we look at the heart of most criminal elements exploiting the Internet today (and what will become Cloud,) you’ll find that the great majority want — no, *need* — victims to be available.  If they’re not, there’s no exploiting them.  DDoS is blunt force trauma — with big, messy, bloody blows that everybody notices.  That’s simply not very good for business.

At the end of the day, I think DDoS is important to think about.  I think variations of DDoS are, too.

I think that most service providers are thinking about it and investing in technology from companies such as Cisco and Arbor to deal with it, but as Roland points out, most enterprises are not — and if Cloud has its way, they shouldn’t have to:

Paradoxically, although Dobbins sees DDOS as the greatest threat to cloud computing, he also sees it as the potential solution for organisations grappling with the complexities of securing the network infrastructure.

“One answer is to get rid of all IT systems and hand them over to an organisation that specialises in these things. If the cloud providers are following best practice and have the visibility to enable them to exert control over their networks it is possible for organisation to outsource everything to them.”

For those organisations that do run their own data centres, he suggests they can avail themselves of ‘clean pipe’ services which protect against DDOS attacks. According to Nick Race, head of Arbor Networks Australia, Telstra, Optus and Nextgen Networks all offer such services.

So what about you?  Moose on the table or pea under the mattress?

/Hoff

A Shout Out to My Boy Grant Bourzikas…It’s How We Roll…

January 19th, 2008 2 comments

I was reading Jeremiah Grossman’s review of Fortify’s film "The New Face of Cybercrime" (watch the trailer here) and noted this little passage in his review:

Then in a bold move, Roger Thorton (CTO of Fortify) and director Fredric Golding (with the 3 other panelists), opened things up to the audience to comment and ask questions. Right when they did that I was thinking to myself, OMG, these guys are crazy asking an infosec what they thought! To their credit they were very patient and professional in dealing with the many inane “constructive” criticisms voiced.

The stand out of the panelists was Grant Bourzikas, CISO of Scottrade, who was able to answer pointed questions masterfully from a “business” interest perspective. Clearly he has been around the block once or twice when it comes to web application security in the real world.

I was thrilled that Jeremiah pointed Grant out.  See, G. was one of my biggest enterprise customers at Crossbeam and I can tell you that he and the rest of the Scottrade security team know their stuff.  They have an incredible service architecture with one of the most robust security strategies you’ve seen in a business that lives and dies by the uptime SLAs they keep; availability is a function of security and Grant and his team do a phenomenal job maintaining both.

I can personally attest to the fact that he’s been around the block more than a couple of times 😉  It’s very, very cool to see someone like Jeremiah recognize someone like Grant — since I know both of them it’s a double-whammy for me because of how much respect I have for each of them.

Wow.  This got a little mushy, huh?  I guess I just miss him and his bobble-head doll (inside joke, sorry Evan.)

My only question is how did Grant manage to escape St. Louis?

/Hoff

Thin Clients: Does This Laptop Make My Ass(ets) Look Fat?

January 10th, 2008 11 comments

Juicy Fat Assets, Ripe For the Picking…

So here’s an interesting spin on de/re-perimeterization…if people think we cannot achieve and cannot afford to wait for secure operating systems, secure protocols and self-defending information-centric environments but need to "secure" their environments today, I have a simple question supported by a simple equation for illustration:

For the majority of mobile and internal users in a typical corporation who use the basic set of applications:

  1. Assume a company that:
    …fits within the 90% of those who still have data centers, isn’t completely outsourced/off-shored for IT and supports a remote workforce that uses Microsoft OS and the usual suspect applications and doesn’t plan on utilizing distributed grid computing and widespread third-party SaaS
  2. Take the following:
    Data Breaches.  Lost Laptops.  Non-sanitized corporate hard drives on eBay.  Malware.  Non-compliant asset configurations.  Patching woes.  Hardware failures.  Device Failure.  Remote Backup issues.  Endpoint Security Software Sprawl.  Skyrocketing security/compliance costs.  Lost Customer Confidence.  Fines.  Lost Revenue.  Reduced budget.
  3. Combine With:
    Cheap Bandwidth.  Lots of types of bandwidth/access modalities.  Centralized Applications and Data. Any Web-enabled Computing Platform.  SSL VPN.  Virtualization.  Centralized Encryption at Rest.  IAM.  DLP/CMP.  Lots of choices to provide thin-client/streaming desktop capability.  Offline-capable Web Apps.
  4. Shake Well, Re-allocate Funding, Streamline Operations and "Security"…
  5. You Get:
    Less Risk.  Less Cost.  Better Control Over Data.  More "Secure" Operations.  Better Resilience.  Assurance of Information.  Simplified Operations. Easier Backup.  One Version of the Truth (data.)

I really just don’t get why we continue to deploy and are forced to support remote platforms we can’t protect, allow our data to inhabit islands we can’t control and at the same time admit the inevitability of disaster while continuing to spend our money on solutions that can’t possibly solve the problems.

If we’re going to be information centric, we should take the first rational and reasonable steps toward doing so. Until the operating systems are more secure, and the data can self-describe and cause the compute and network stacks to "self-defend," why do we continue to focus on the endpoint, which is a waste of time?

If we can isolate and reduce the number of avenues of access to data and leverage dumb presentation platforms to do it, why aren’t we?

…I mean besides the fact that an entire industry has been leeching off this mess for decades…


I’ll Gladly Pay You Tuesday For A Secure Solution Today…

The technology exists TODAY to centralize the bulk of our most important assets and allow our workforce to accomplish their goals and the business to function just as well (perhaps better) without the need for data to actually "leave" the data centers in whose security we have already invested so much money.

Many people are doing that with their servers already with the adoption of virtualization.  Now they need to do it with their clients.

The only reason we’re now going absolutely stupid and spending money on securing endpoints in their current state is that we’re CAUSING (not just allowing) data to leave our enclaves.  In fact, with all this blabla2.0 hype, we’ve convinced ourselves we must.

Hogwash.  I’ve posted on the consumerization of IT where companies are allowing their employees to use their own compute platforms.  How do you think many of them do this?

Relax, Dude…Keep Your Firewalls…

In the case of centralized computing and streamed desktops to dumb/thin clients, the "perimeter" still includes our data centers and security castles/moats, but also encapsulates a streamed, virtualized, encrypted, and authenticated thin-client session bubble.  Instead of worrying about the endpoint, it’s nothing more than a flickering display with a keyboard/mouse.

Let your kid use Limewire.  Let Uncle Bob surf pr0n.  Let wifey download spyware.  If my data and applications don’t live on the machine and all the clicks/mouseys are just screen updates, what do I care?

Yup, you can still use a screen scraper or a camera phone to use data inappropriately, but this is where balancing risk comes into play.  Let’s keep the discussion within the 80% of reasonable factored arguments.  We’ll never eliminate 100% and we don’t have to in order to be successful.

Sure, there are exceptions and corner cases where data *does* need to leave our embrace, but we can eliminate an entire class of problem if we take advantage of what we have today and stop this endpoint madness.

This goes for internal corporate users who are chained to their desks and not just mobile users.

What’s preventing you from doing this today?

/Hoff

2008 Security Predictions — They’re Like Elbows…

December 3rd, 2007 6 comments

Yup.  Security predictions are like elbows.  Most everyone’s got at least two; they’re usually ignored unless rubbed the wrong way, but when used appropriately they can be devastating in a cage match…

So, in the spirit of, well, keeping up with the Joneses, I happily present you with Hoff’s 2008 Information (in)Security Predictions.  Most of them feature attacks/attack vectors.  A couple are ooh-aah trends.  Most of them are sadly predictable.  I’ve tried to be more specific than "cybercrime will increase."

I’m really loath to do these, but being a futurist, the only comfort I can take is that nobody can tell me that I’m wrong today 😉

…and in the words of Carnac the Magnificent, "May the winds of the Sahara blow a desert scorpion up your turban…"

  1. Nasty Virtualization Hypervisor Compromise
    As the Hypervisor gets thinner, more of the guts will need to be exposed via API or shed to management and functionality-extending toolsets, expanding the attack surface with new vulnerabilities.  To wit, a Hypervisor-compromising malware will make its first in-the-wild appearance to not only produce an exploit, but obfuscate itself thanks to the magic of virtualization in the underlying chipsets.  Hang on to yer britches, the security vendor product marketing SpecOps Generals are going to scramble the fighters with a shock and awe campaign of epic "I told you so" & "AV isn’t dead, it’s just virtualized" proportions…Security "strategery" at its finest.

  2. Major Privacy Breach of a Social Networking Site
    With the broadening reach of application extensibility and Web2.0 functionality, we’ll see a major privacy breach via social network sites such as MySpace, LinkedIn or Facebook via the usual suspects (CSRF, XSS, etc.) and via host-based Malware that 0wns unsuspecting Millennials and utilizes the interconnectivity offered to turn these services into a "social botnet" platform with a wrath the likes of which only the ungodly lovechild of Storm, Melissa, and Slammer could bring…

  3. Integrity Hack of a Major SaaS Vendor
    Expect a serious bit of sliminess with real financial impact to occur from a SaaS vendor’s offering.  With professional cybercrime on the rise, the criminals will go not only where the money is, but also after the data that describes where that money is.  Since much of the security of the SaaS model counts on the integrity and not just the availability of the hosted service, a targeted attack which holds hostage the (non-portable) data and threatens its integrity could have devastating effects on the companies who rely on it.  SalesForce, anyone?
     
  4. Targeted eBanking Compromise with substantial financial losses
    Get ready for a nasty eBanking focused compromise that starts to unravel the consumer confidence in this convenient utility; not directly because of identity abuse (note I didn’t say identity theft) but because of the business model impact it will bring to the banks.   These types of direct attacks (beyond phishing) will start to push the limits of acceptable loss for the financial institutions and their insurers and will start to move the accountability/responsibility more heavily down to the eBanker.  A tiered service level will surface with greater functionality/higher transaction limits being offered with a trade-off of higher security/less convenience.  Same goes for credit/debit cards…priceless!
  5. A Major state-sponsored espionage and cyberAttack w/disruption of U.S. government function
    We saw some of the noisier examples of low-level crack attacks via our Chinese friends recently, but given the proliferation of botnets and the inexcusably poor levels of security in government systems and networks, we’ll see a targeted attack against something significant.  It’ll be big.  It’ll be public.  It’ll bring new legislation…Isn’t there some little election happening soon?  This brings us to…
  6. Be-Afraid-A of a SCADA compromise…the lunatics are running the asylum!
    Remember that leaked DHS "turn your generator into a roman candle" video that circulated a couple of months ago?  Get ready to see the real thing on prime time news at 11.  We’ve got decades of legacy controls just waiting for the wrong guy to flip the right switch.  We just saw an "insider" of a major water utility do naughty things, imagine if someone really motivated popped some goofy pills and started playing Tetris with the power grid…imagine what all those little SCADA doodads are hooked to…
     
  7. A Major Global Service/Logistics/Transportation/Shipping/Supply-Chain Company will be compromised via targeted attack
    A service we take for granted like UPS, FedEx, or DHL will have their core supply chain/logistics systems interrupted causing the fragile underbelly of our global economic interconnectedness to show itself, warts and all.  Prepare for huge chargebacks on next day delivery when all those mofo’s don’t get their self-propelled, remote-controlled flying UFO’s delivered from Amazon.com.

  8. Mobile network attacks targeting mobile broadband
    So, you don’t use WiFi because it’s insecure, eh?  Instead, you fire up that Verizon EVDO card plugged into your laptop or tether to your mobile phone because it’s "secure."  Well, that’s going to be a problem next year.  Expect to see compromise of the RF you hold so dear as we all scramble to find that next piece of spectrum that has yet to be 0wn3d…Google’s 700MHz spectrum, you say? Oh, wait…WiMax will save us all…
     
  9. My .txt file just 0wn3d me!  Is nothing sacred!?  Common file formats and protocols to cause continued unnatural acts
    PDFs, QuickTime, .PPT, .DOC, .XLS.  If you can’t trust the sanctity of the file formats and protocols from Adobe, Apple and Microsoft, who can you trust!?  Expect to see more and more abuse of generic underlying software plumbing providing the conduit for exploit.  Vulnerabilities that aren’t fixed properly, combined with a dependence on OS security functionality that’s only half baked, are going to mean that the "Burros Gone Wild" video you’re watching on YouTube is going to make you itchy in more ways than one…

  10. Converged SensorNets
    In places like the UK, we’ve seen the massive deployment of CCTV monitoring of the populace.  In places like South Central L.A., we have ballistic geo-location and detection systems to track gunshots.  We’ve got GPS in phones.  In airports we have sniffers, RFID passport processing, biometrics and "Total Recall" nudie scanners.  The PoPo have license plate recognition.  Vegas has facial recognition systems.  Our borders have motion, heat and remote sensing pods.  Start knitting this all together and you have massive SensorNets — all networked — and able to track you to military precision.  Pair that with GoogleMaps/Streets and I’ll be able to tell what color underwear you had on at the Checkout counter of your local Qwik-E-Mart when you bought that mocha slurpaccino last Tuesday…please don’t ask me how I know.

  11. Information Centric Security Phase One
    It should come as no surprise that focusing our efforts on the host and the network has led to the spectacular septic tank of security we have today.  We need to focus on content in context and set policies across platform and transport to dictate who, how, when, where, and why the creation, modification, consumption and destruction of data should occur.  In this first generation of DLP/CMF solutions (which are being integrated into the larger base of "Information" centric "assurance" solutions,) we’ve taken the first step along this journey.  What we’ll begin to see in 2008 is the information equivalent of the Mission Impossible self-destructing recording…only with a little more intelligence and less smoke.  Here come the DRM haters…
     
  12. The Attempted Coup to Return to Centralized Computing with the Paradox of Distributed Data
    Despite the fact that data is being distributed to the far reaches of the Universe, the wonders of economics combined with the utility of some well-timed technology are seeing IT & Security (encouraged by the bean counters) attempting to put the genie back in the bottle and re-centralize the computing (desktop, server, application and storage) experience into big boxes tucked safely away in some data center somewhere.  Funny thing is, with utility/grid computing and SaaS, the data center is but an abstraction, too.  Virtualization companies will become our dark overlords as they will control the very fabric of our digital lives…2008 is when we’ll really start to use the web as the platform for the delivery of all applications, served through streamed desktops on thinner and thinner clients.

So, that’s all I could come up with.  I don’t really have a formulaic empirical model like Stiennon.  I just have a Guinness and start complaining.  This is what I came up with.

In more ways than one, I hope I’m terribly wrong on most of these.

/Hoff

[Edit: Please see my follow-on post titled "And Now Some Useful 2008 Information Survivability Predictions," which speaks to some interesting, less gloomy things I predict will happen in 2008.]

On-Demand SaaS Vendors Able to Secure Assets Better than Customers?

August 16th, 2007 4 comments

I’m a big advocate of software as a service (SaaS) — have been for years.  This evangelism started for me almost 5 years ago when I became a Qualys MSSP customer, listening to Philippe Courtot espouse the benefits of SaaS for vulnerability management.  This was an opportunity to allow me to more efficiently, effectively and cheaply manage my VA problem.  They demonstrated how they were good custodians of the data (my data) that they housed and how I could expect they would protect it.

I did not, however, feel *more* secure because they housed my VA data.  I felt secure enough in how they housed it that it would not fall into the wrong hands.  It’s called an assessment of risk and exposure.  I performed it and was satisfied it matched my company’s appetite and business requirements.

Not one to appear unclear on where I stand, I maintain that SaaS can bring utility, efficiency, cost effectiveness, enhanced capabilities and improved service levels to a corporation depending upon who, what, why, how, where and when the service is deployed.  Sometimes it can bring a higher level of security to an organization, but so can a squadron of pissed-off, armed Oompa Loompas — it’s all a matter of perspective.

I suggest that attempting to qualify the benefits of SaaS by generalizing in any sense is, well, generally a risky thing to do.  It often turns what could be a valid point of interest into a point of contention.

Such is the case with a story I read in a UK edition of IT Week by Phil Muncaster titled "On Demand Security Issues Raised."  In this story, the author describes the methods by which the security posture of SaaS vendors may be measured, comparing the value, capabilities and capacity of the various options and the venue for evaluating a SaaS MSSP: hire an external contractor or rely on the MSSP to furnish you with the results of an internally generated assessment.

I think this is actually a very useful and valid discussion to have — whom to trust and why?  In many cases, these vendors house sensitive and sometimes confidential data regarding an enterprise, so security is paramount.  One would suggest that anyone looking to engage an MSSP of any sort, especially one offering a critical SaaS, would perform due diligence in one form or another before signing on the dotted line.

That’s not really what I wanted to discuss, however.

What I *did* want to address was the comment in the article coming from Andy Kellett, an analyst for Burton, that read thusly:

"Security is probably less a problem than in the end-user organisations
because [on-demand app providers] are measured by the service they provide,"
Kellett argued.

I *think* I probably understand what he’s saying here…that security is "less of a problem" for an MSSP because the pressure of the implied penalties associated with violating an SLA is so much more motivating to get security "right" that they can do it far more effectively and efficiently than a customer can.

This is a selling point, I suppose?  Do you, dear reader, agree?  Does the implication of outsourcing security actually mean that you "feel" or can prove that you’re more secure or better secured than you could do yourself by using a SaaS MSSP?

"I don’t agree the end-user organisation’s pen tester of choice
should be doing the testing. The service provider should do it and make that
information available."

Um, why?  I can understand not wanting hundreds of scans against my service in an unscheduled way, but what do you have to hide?  You want me to *trust* you that you’re more secure or holding up your end of the bargain?  Um, no thanks.  It’s clear that this person has never seen the results of an internally generated PenTest and how real threats can be rationalized away into nothingness…

Clarence So of Salesforce.com agreed, adding that most chief information officers today understand that software-as-a-service (SaaS) vendors are able to secure data more effectively than they can themselves.

Really!?  It’s not just that they gave in to budget pressures, agreed to transfer the risk and reduce OpEx and CapEx?  Care to generalize more thoroughly, Clarence?  Can you reference proof points for me here?  My last company used Salesforce.com, but as the person who inherited the relationship, I can tell you that I didn’t feel at all more "secure" because SF was hosting my data.  In fact, I felt more exposed.

"I’m sure training companies have their own motives for advocating the need
for in-house skills such as penetration testing," he argued. "But any
suggestions the SaaS model is less secure than client-server software are well
wide of the mark."

…and any suggestion that they are *more* secure is pure horsecock marketing at its finest.  Prove it.  And please don’t send me your SAS-70 report as your example of security fu.

So just to be clear, I believe in SaaS.  I encourage its use if it makes good business sense.  I don’t, however, agree that you will automagically be *more* secure.  You may be just *as* secure, but it should be more cost-effective to deploy and manage.  There may very well be cases (I can even think of some) where one could be more or less secure, but I’m not into generalizations.

Whaddya think?

/Hoff

Secure Services in the Cloud (SSaaS/Web2.0) – InternetOS Service Layers

July 13th, 2007 2 comments

The last few days of activity involving Google and Microsoft have really catalyzed some thinking and demonstrated some very intriguing indicators as to how the delivery of applications and services is dramatically evolving. 

I don’t mean the warm and fuzzy marketing fluff.  I mean some real anchor technology investments by the big-boys putting their respective stakes in the ground as they invest hugely in redefining their business models to setup for the future.

Enterprises large and small are really starting to pay attention to the difference between infrastructure and architecture and this has a dramatic effect on the service providers and supply chain who interact with them.

It’s become quite obvious that there is huge business value associated with divorcing the need for "IT" to focus on physically instantiating and locating "applications" on "boxes" and instead  delivering "services" with the Internet/network as the virtualized delivery mechanism.

Google v. Microsoft – Let’s Get Ready to Rumble!

My last few posts on Google’s move to securely deliver a variety of applications and services represent the uplift of the "traditional" perspective of backoffice SaaS offerings such as Salesforce.com, but they also highlight the migration of desktop applications and utility services to the "cloud."

This is really executing on the thin-client, Internet-centric vision from back in the day o’ the bubble, when we saw a ton of Internet-borne services such as storage, backup, etc. using the "InternetOS" as the canvas for service.

So we’ve talked about Google.  I maintain that their strategy is to ultimately take on Microsoft — including backoffice, utility and desktop applications.  So let’s look @ what the kids from Redmond are up to.

What Microsoft is developing toward with its vision of a CloudOS was just recently expounded upon by one Mr. Ballmer.

Not wanting to lose mindshare or share of wallet, Microsoft is maneuvering to give the customer control over how they want to use applications and more importantly how they might be delivered.  Microsoft Live bridges the gap between the traditional desktop and puts that capability into the "cloud."

Let’s explore that a little:

In addition to making available its existing services, such as mail and instant messaging, Microsoft also will create core infrastructure services, such as storage and alerts, that developers can build on top of. It’s a set of capabilities that have been referred to as a "Cloud OS," though it’s not a term Microsoft likes to use publicly.

Late last month, Microsoft introduced two new Windows Live Services, one for sharing photos and the other for all types of files. While those services are being offered directly by Microsoft today, they represent the kinds of things that Microsoft is now promising will be also made available to developers.

Among the other application and infrastructure components Microsoft plans to open are its systems for alerts, contact management, communications (mail and messenger) and authentication.

As it works to build out the underlying core services, Microsoft is also offering up applications to partners, such as Windows Live Hotmail, Windows Live Messenger and the Spaces blogging tool.

Combine the advent of "thinner" end-points (read: mobility products) with high-speed, lower-latency connectivity and we can see why this model is attractive and viable.  I think this battle is heating up and the consumer will benefit.

A Practical Example of SaaS/InternetOS Today?

So if we take a step back from Google and Microsoft for a minute, let’s take a snapshot of how one might compose, provision, and deploy applications and data as a service using a similar model over the Internet with tools other than Live or Google Gears.

Let me give you a real-world example — deliverable today — of this capability with a functional articulation of this strategy; on-demand services and applications provided via virtualized datacenter delivery architectures using the Internet as the transport.  I’m going to use a mashup of two technologies: Yahoo Pipes and 3tera’s AppLogic.

Yahoo Pipes is "…an interactive data aggregator and manipulator that lets you mashup your favorite online data sources."  Assuming you have data from various sources you want to present, an application environment such as Pipes will allow you to dynamically access, transform and present this information any way you see fit.

This means that you can create what amounts to applications and services on demand.
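
If you want to get a feel for this without Pipes itself, here is a minimal sketch of the same idea in Python: pull a few feeds, filter them and normalize them into one combined view. It uses the third-party feedparser library, and the feed URLs and keyword are placeholders:

```python
import feedparser   # third-party: pip install feedparser

# Placeholder feeds: any RSS/Atom sources you want to mash together.
FEEDS = [
    "https://example.com/security/rss.xml",
    "https://example.org/cloud/atom.xml",
]

def mashup(feeds, keyword):
    """Aggregate several feeds and keep only entries whose titles match a keyword."""
    items = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            title = entry.get("title", "")
            if keyword.lower() in title.lower():
                items.append({"source": url, "title": title, "link": entry.get("link")})
    return sorted(items, key=lambda item: item["title"])

for item in mashup(FEEDS, "cloud"):
    print(f'{item["title"]} -> {item["link"]}')
```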

Let’s agree, however, that while you have the data integration/presentation layer, in many cases you would traditionally require a complex collection of infrastructure in which this source data is housed, accessed, maintained and secured.

However, rather than worry about where and how the infrastructure is physically located, let’s use the notion of utility/grid computing to make available dynamically an on-demand architecture that is modular, reusable and flexible to make my service delivery a reality — using the Internet as a transport.

Enter 3Tera’s AppLogic:

3Tera’s AppLogic is used by hosting providers to offer true utility computing. You get all the control of having your own virtual datacenter, but without the need to operate a single server.

  - Deploy and operate applications in your own virtual private datacenter
  - Set up infrastructure, deploy apps and manage operations with just a browser
  - Scale from a fraction of a server to hundreds of servers in days
  - Deploy and run any Linux software without modifications
  - Get your life back: no more late-night rushes to replace failed equipment

In fact, BT is using them as part of the 21CN project which I’ve written about many times before.

So check out this vision, assuming the InternetOS as a transport.  It’s the drag-and-drop, point-and-click Metaverse of virtualized application and data combined with on-demand infrastructure.

You first define the logical service composition and provisioning through 3Tera with a visual drag-and-drop canvas, defining firewalls, load-balancers, switches, web servers, app servers, databases, etc.  Then you click the "Go" button.  AppLogic provisions the entire thing for you without you even necessarily knowing where these assets are.

Then, use something like Pipes to articulate how data sources can be accessed, consumed and transformed to deliver the requisite results.  All over the Internet, transparently and securely.
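
To illustrate the shape of that workflow (and this is purely a pseudo-model, not AppLogic’s actual interface), the declarative "describe it, then provision it" idea looks something like this:

```python
# A hypothetical declarative service topology: every name and field here is
# illustrative, meant only to show the "describe it, then provision it" shape.
topology = {
    "firewall":      {"allows": [{"proto": "tcp", "port": 443, "to": "load_balancer"}]},
    "load_balancer": {"pool": ["web1", "web2"]},
    "web1":          {"image": "linux-web", "cpu": 0.5, "ram_gb": 1, "talks_to": ["app"]},
    "web2":          {"image": "linux-web", "cpu": 0.5, "ram_gb": 1, "talks_to": ["app"]},
    "app":           {"image": "linux-app", "cpu": 1.0, "ram_gb": 2, "talks_to": ["db"]},
    "db":            {"image": "linux-db",  "cpu": 2.0, "ram_gb": 4, "talks_to": []},
}

def provision(topology):
    # A real utility-computing fabric would map each node onto pooled hardware
    # somewhere; here we just walk the graph to show the declarative shape.
    for name, spec in topology.items():
        print(f"provisioning {name}: {spec}")

provision(topology)
```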

Very cool stuff.

Here are some screen-caps of Pipes and 3Tera.

[Screen-cap: Yahoo Pipes]

[Screen-cap: 3Tera AppLogic]

Take5- Five Questions for Chris Wysopal, CTO Veracode

June 19th, 2007 No comments

In this first installment of Take5, I interview Chris Wysopal, the CTO of Veracode, about his new company, secure coding, vulnerability research and the recent forays into application security by IBM and HP.

This entire interview was actually piped over a point-to-point TCP/IP connection using command-line redirection through netcat.  No packets were harmed during the making of this interview…

First, a little background on the victim, Chris Wysopal:

Chris Wysopal is co-founder and CTO of Veracode. He has testified on Capitol Hill on the subjects of government computer security and how vulnerabilities are discovered in software. Chris co-authored the password auditing tool L0phtCrack, wrote the Windows version of netcat, and was a researcher at the security think tank, L0pht Heavy Industries, which was acquired by @stake. He was VP of R&D at @stake and later director of development at Symantec, where he led a team developing binary static analysis technology.

He was influential in the creation of responsible vulnerability disclosure guidelines and a founder of the Organization for Internet Safety.  Chris wrote "The Art of Software Security Testing: Identifying Security Flaws", published by Addison Wesley and Symantec Press in December 2006. He earned his Bachelor of Science degree in Computer and Systems Engineering from Rensselaer Polytechnic Institute.

1) You’re a founder of Veracode, which is described as the industry’s first provider of automated, on-demand application security solutions.  What sort of application security services does Veracode provide?  Binary analysis, Web Apps?
 
Veracode currently offers binary static analysis of C/C++ applications for Windows and Solaris and for Java applications.  This allows us to find the classes of vulnerabilities that source code analysis tools can find but on the entire codebase including the libraries which you probably don’t have source code for. Our product roadmap includes support for C/C++ on Linux and C# on .Net.  We will also be adding additional analysis techniques to our flagship binary static analysis.
 
2) Is this a SaaS model?  How do you charge for your services?  Do you see manufacturers using your services or enterprises?

 
Yes. Customers upload their binaries to us and we deliver an analysis of their security flaws via our web portal.  We charge by the megabyte of code.  We have both software vendors and enterprises who write or outsource their own custom software using our services.  We also have enterprises who are purchasing software ask the software vendors to submit their binaries to us for a 3rd party analysis.  They use this analysis as a factor in their purchasing decision. It can lead to a "go/no go" decision, a promise by the vendor to remediate the issues found, or a reduction in price to compensate for the cost of additional controls or the cost of incident response that insecure software necessitates.
 
3) I was a Qualys customer — a VA/VM SaaS company.  Qualys had to spend quite a bit of time convincing customers that allowing for the storage of their VA data was secure.  How does Veracode address a customer’s security concerns when uploading their applications?

We are absolutely fanatical about the security of our customers’ data.  I look back at the days when I was a security consultant, where we had vulnerability data on laptops and corporate file shares, and I say, "what were we thinking?"  All customer data at Veracode is encrypted in storage and at rest with a unique key per application and customer.  Everyone at Veracode uses 2 factor authentication to log in and 2 factor is the default for customers.  Our data center is a SAS 70 Type II facility. All data access is logged so we know exactly who looked at what and when. As security people we are professionally paranoid and I think it shows through in the system we built.  We also believe in 3rd party verification, so we have had a top security boutique do a security review of our portal application.
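
[A generic illustration of that per-customer, per-application keying pattern (not Veracode’s implementation, just a sketch using Python’s cryptography library):]

```python
from cryptography.fernet import Fernet

# Toy key store: one data-encryption key per (customer, application) pair.
# A real system would keep these in an HSM or key-management service.
_keys = {}

def key_for(customer_id, app_id):
    return _keys.setdefault((customer_id, app_id), Fernet.generate_key())

def store(customer_id, app_id, blob: bytes) -> bytes:
    # Encrypt an uploaded blob under that customer/application's own key.
    return Fernet(key_for(customer_id, app_id)).encrypt(blob)

def load(customer_id, app_id, token: bytes) -> bytes:
    return Fernet(key_for(customer_id, app_id)).decrypt(token)

ciphertext = store("acme", "payroll-app", b"uploaded binary bytes")
assert load("acme", "payroll-app", ciphertext) == b"uploaded binary bytes"
```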
 
4) With IBM’s acquisition of Watchfire and today’s announcement that HP will buy SPI Dynamics, how does Veracode stand to play in this market of giants who will be competing to drive service revenues?

 
We have designed our solution from the ground up to have the Web 2.0 ease of use and experience, and we have the quality of analysis that I feel is the best in the market today.  An advantage is that Veracode is an independent assessment company that customers can trust not to play favorites to other software companies because of partnerships or alliances. Would Moody’s or Consumer Reports be trusted as a 3rd party if they were part of a big financial or technology conglomerate? We feel a 3rd party assessment is important in the security world.
 
5) Do you see the latest developments in vulnerability research, with the drive for pay-for-zeroday initiatives, pressuring developers to produce secure code out of the box for fear of exploit, or is it driving the activity to companies like yours?

 
I think the real driver for developers to produce secure code, and for developers and customers to seek code assessments, is the reality that the cost of insecure code goes up every day and it’s adding to the operational risk of companies that use software.  People exploiting vulnerabilities are not going away and there is no way to police the internet of vulnerability information.  The only solution is for customers to demand more secure code, and proof of it, and for developers to deliver more secure code in response.