
Archive for the ‘Risk Management’ Category

As Promised: ISO17799-Aligned Set of IT/Information Security P&P’s – Great Rational Starter Kit for a Security Program

August 27th, 2007 14 comments

Per my offer last week, I received a positive response to my query asking if folks might find useful a set of well-written policies and procedures aligned to ISO17799.  I said that I would do the sanitizing work and release them if I got a fair response.

I did and here they are.  This is in Microsoft Word Format.  534 KB.

My only caveat for those who download and use these: please don’t sell them or otherwise engage in commercial activity based upon this work.

I’m releasing it into the wild because I want to help make people’s lives easier and if these P&P’s can help make your security program better, great.  I don’t want anything in return except perhaps that someone else will do something similar.

I must admit that I alluded to a lot of time, sweat and tears that *I* contributed to this document.  To be fair and honest in full disclosure, I did not create the majority of this work; it’s based upon prior art from multiple past lives, and most of it isn’t mine exclusively.

As a level-set reminder:

The P&P’s are a complete package that outline at a high level the basis of an ISO-aligned security program; you could basically search/replace and be good to go for what amounts to 99% of the basic security coverage you’d need to address most elements of a well-stocked security pantry.

You can use this “English” high-level summary set to point to indexed detailed P&P mechanics or standards that are specific to your organization.

All you need to do is modify the header/footer with your company’s logo & information and do a search/replace of [COMPANY] with your own name, and you’ve got a fantastic template to start building from or to bolt onto another framework.
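For the script-inclined, the same substitution can be automated; here’s a minimal sketch assuming the python-docx library, that you’ve re-saved the .doc as .docx, and hypothetical file and company names:

```python
# Sketch: swap the [COMPANY] placeholder throughout the template.
# Assumes python-docx (pip install python-docx) and that the Word doc
# has been re-saved as .docx; file and company names are hypothetical.
from docx import Document

PLACEHOLDER = "[COMPANY]"
COMPANY = "Acme Widgets, Inc."  # hypothetical

doc = Document("iso17799_pnp_template.docx")

def swap(paragraphs):
    for p in paragraphs:
        # Run-level replacement preserves formatting; Word can split a
        # placeholder across runs, so spot-check the output afterwards.
        for run in p.runs:
            if PLACEHOLDER in run.text:
                run.text = run.text.replace(PLACEHOLDER, COMPANY)

swap(doc.paragraphs)

# Headers and footers (your logo & information) live on each section.
for section in doc.sections:
    swap(section.header.paragraphs)
    swap(section.footer.paragraphs)

doc.save("iso17799_pnp_yourco.docx")
```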

Please let me know if this is worthwhile and helped you.  I could do all sorts of log tracking to see how many times it’s downloaded, etc., but if you found it helpful (even if you just stash it away for a rainy day) do let me know in the comments, please.

I also have a really good Incident Response Plan that I consolidated from many inputs; that one’s been put through at least one incident horizon and I lived to tell about it.

Regards,

/Hoff

Anyone interested in an ISO17799-Aligned Set of IT/Information Security P&P’s – Great Rational Starter Kit for a Security Program?

August 22nd, 2007 13 comments

I have spent a lot of time, sweat and tears in prior lives chipping away at building a template set of IT/Information Security policies and procedures that were aligned to (and audited against) various regulatory requirements and the 10 Domains/127 Controls of ISO17799.

This consolidated set of P&P’s is intact and well written.  Actual business people have been able to read, understand and (gasp!) comply with them.  I know, "impossible!" you say.  Nay, ’tis rational is all…

As part of my effort to give back, I thought that many of you may be at a point where, even though you have lots of P&P’s specific to your business, not having to reinvent the wheel by drafting this sort of polished package yourself (or paying someone to do it) might be useful.

The P&P’s are a complete package that outline at a high level the basis of an ISO-aligned security program; you could basically search/replace and be good to go for what amounts to 99% of the basic security coverage you’d need to address most elements of a well-stocked security pantry.

You can use this "English" high-level summary set to point to indexed detailed P&P mechanics or standards that are specific to your organization.

Would this be of some use to you?  I would need to do some work to smooth some rough spots and sanitize the Word doc, but if there is enough interest I’ll do it and post it for whosoever would like it.  Just to be clear, the P&P’s are already written; I’ll just make them SEARCH/REPLACE friendly.

I’m not trying to tease anyone, I just don’t want to do the up-front work if nobody is interested.

Let me know in the comments; no need to leave website links (for obvious reasons), just let me know by your comment if this is something you’d like.  If I get enough demand, I’ll "get her done!"

Update: OK, good enough.  Thanks for the comments.  I’ll post it up in the next few days.  Thanks, guys.

/Hoff

Risk Management & InfoSec Operational Combatants – The Leviathan Force & the SysAdmins…the Real Art of War.

August 14th, 2007 11 comments

No, this is not some enlightened pithy post heaping praise on a dead Chinese military strategist. Nothing against his Tzu-ness, but I’m just plain tired of this overplayed guidepost for engaging in InfoSec warfare.  For God’s sake, I own an iPod. I’m, like, so enlightened, sophisticated and refined.  Sun Tzu is so last Wednesday!  The closest I come to Chinese philosophy is deciding whether to order the Kung Pao chicken or the Pork Lo Mein.  Just to get that out of the way…

If you haven’t seen Thomas P.M. Barnett’s talk "The Pentagon’s New Map for War and Peace" from the 2005 TED, you should definitely click here and do so.  Barnett is a brilliant and witty international security strategist who offers a unique contrarian perspective on the post-Cold War US military complex, one that differs quite drastically from that of the typical long-range strategic planners squatting in the Pentagon:

"In this bracingly honest and funny talk, international security strategist Thomas P.M. Barnett outlines a post-Cold War solution for the foundering US military: Break
it in two. He suggests the military re-form into two groups: a
Leviathan force, a small group of young and fierce soldiers capable of
swift and immediate victories; and an internationally supported network
of System Administrators, an older, wiser, more diverse organization
that actually has the diplomacy and power it takes to build and
maintain peace.
"

What I find amazingly serendipitous about the timing of Barnett’s presentation is that it rang true with a theme I had been mulling over: it draws remarkable and absolute parallels to the state (and needed resuscitation) of how we practically organize the Risk Management and Information Security "combatant" fighting forces on the front lines of corporate America today, and to the thought processes and doctrines that govern how we operate.

I’m not going to spend much time here presenting this analogy and how it relates to the Risk Management/InfoSec world. 

Watch the video and take a peek at this one excerpt slide below.  Think about how we’re structured to do "battle" in our war against the bad guys.  As Barnett says, what we need is two armies focused on one victory with a division of assets between them; the Leviathan Force and the SysAdmins:

[Slide: Barnett’s division of assets between the Leviathan force and the SysAdmins]

I suggest that we need to recognize that the goals of these two forces are really diametrically opposed, which is why Ops staff bristle at all the bag-winding policy, procedures and change control while the managers lament how the young’uns can’t grasp the concepts of diplomacy and of managing risk instead of just threats and vulnerabilities.

We need to organize what we do and how we approach the deployment of resources (forces) around this same concept in balance.  Yet, most people staff up in order to man a posted headcount as part of some mechanical rhythm that has for so long defined how we "do" InfoSec. 

I’m not sure that many people actually have a security strategy that defines a long-term, achievable objective toward winning the war to achieve peace; rather, we keep throwing bodies into the cannon fire, serially sacrificing combatants as fodder for the sake of fighting.

Some of us actually do organize and hire based upon placing talent that is strategically as well as tactically the right fit for the job.  My observation is that in reality this practice is far more often just the byproduct of an ever-greying combatant force aging through its lifecycle…and it’s usually very, very unbalanced.

Instead, how about aiming to consciously build that Leviathan force of tactical soldiers who live, eat and sleep for "combat" (firewall jockeys, IDS/IPS analysts, etc.) and then take the older, wiser and diverse corps (architects, engineers, etc.) and have them deal with the aftermath — as a networking force — in order to maintain "peace" and let the soldiers go off hunting for the next foxhole to jump in.

Granted, we don’t talk offense in the traditional sense of how we play the game in our profession, and it’s a losing proposition because we’re holding ourselves hostage to a higher standard and a set of rules our enemies have no intention of playing by.  We organize inappropriately to repel the opposing force and then wonder why we characterize what we do as a losing battle.

Now, I’m not outright suggesting that *everyone* run out and deploy first-strike capabilities, but certainly entertain the thought of countermeasures that are more than a firewall rule and a blacklisted IP address.  We can’t win on defense alone.  Gulp!  There, I said it.

So I’ll ask you again.  Watch that video and think about Risk Management/InfoSec instead of traditional warfighting.  You’ll laugh, you’ll cry and perhaps you’ll think differently about how you deploy your forces, how you fund your campaigns and ultimately which battles you pick to engage in and how.

"Don’t wage the war if you don’t want to win the peace…"

/Hoff


Security RROI (Reduction of Risk on Investment)

July 23rd, 2007 5 comments

The security blogosphere sure is exciting these days.  I can’t decide whether to tune into the iPhone junkie wars, the InfoSec Sellout soap opera or the Security ROI cage match!

I’m going to pick the latter because quite honestly, the other two are about as inflated as Bea Arthur’s girdle…

(Edit: link added for Cutaway, whose predilection for Bea Arthur and her undergarments is disturbing at best…  Warning: may cause chafing.)

Unless you’ve been under a rock (or actually, gasp!, working) you’ve no doubt seen Rich Bejtlich’s little gem titled "No ROI?  No Problem" that re-kindled all sorts of emotive back and forth debating the existence of Security ROI.

It was revisited by Rich here and then here…and then picked up by Lindstrom, Hutton, Cutaway and the rest of the risk management cognoscenti.  All good stuff.

It seems that the unofficial scoring has the majority of contributors to the debate suggesting that Security ROI does not exist…sort of.  The qualification of the word "return" really seems to be the important linchpin here, as contribution (margin, profit, etc.) versus cost avoidance is what sends people off the deep end.

It appears that if we define ‘return’ to suggest that what you get back is a way of avoiding shelling out money, then indeed, one may quantify a return on the investment made.

Fine.  I’m good with that.  To a point.

However, I’ve never used ROI in any metric I’ve produced.  NPV?  Nope.  ROSI?  Nuh-uh.

What I have chosen to use is RROI — the reduction of risk on investment.  HA!  Another term.

Basically, I’ve used various combinations of metrics and measurements to quantify data points and answer the question:

"If I invest in some element of my security program (people, process, technology) — or after I have invested in it — am I more secure than I was before and how much more?  Furthermore, how should I manage my investment portfolio to give me the best reduction of risk?"

One doesn’t hire security guards because of an expectation that this action will make one more profitable; it’s a cost of doing business that allows one to assess the risk based on impact and decide how, if at all, one could or should invest in security to defray the impact and cost associated with the event(s) one is trying to mitigate.

Ah yes, the old "why would you spend $1000 to protect a $10 asset?" question.  Can you answer this question for every security investment you make?
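For what it’s worth, here’s a back-of-the-napkin sketch of how one might put numbers behind that question using classic annualized loss expectancy (ALE) inputs.  It’s an illustration of the arithmetic, not a complete model, and every figure in it is made up:

```python
# Sketch: "reduction of risk on investment" using annualized loss
# expectancy (ALE = single loss expectancy x annual rate of occurrence).
# All figures below are hypothetical.

def ale(single_loss_expectancy, annual_rate_of_occurrence):
    return single_loss_expectancy * annual_rate_of_occurrence

def rroi(ale_before, ale_after, annual_cost_of_control):
    """Risk reduced per dollar invested, per year."""
    return (ale_before - ale_after) / annual_cost_of_control

# The classic "$1000 to protect a $10 asset" check:
before = ale(single_loss_expectancy=10, annual_rate_of_occurrence=2)    # $20/yr
after  = ale(single_loss_expectancy=10, annual_rate_of_occurrence=0.1)  # $1/yr
print(rroi(before, after, annual_cost_of_control=1000))   # 0.019 -> don't buy

# A control that pays for itself in risk reduction:
before = ale(single_loss_expectancy=250_000, annual_rate_of_occurrence=0.5)
after  = ale(single_loss_expectancy=250_000, annual_rate_of_occurrence=0.05)
print(rroi(before, after, annual_cost_of_control=40_000)) # ~2.8 -> worth a look
```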

I’d say that I’ve always been able to communicate what the "return" (see above) would be on investments made and done so in a manner that has always seen my security budgets grow when necessary and trim when warranted.  The transparency I strive to produce is communicated in business terms that anyone who can understand basic math and business logic can process.  Maybe I’m just lucky. 

I’m not saying I have the problem licked or that I found the holy grail, but the problem just doesn’t seem to be as daunting as some would have you believe.  Start small, be rational and build and manage your portfolio accordingly.

So, how many of you have risk dashboards that can, in near real-time, communicate where you invest, why, and how this maps to the business and helps you most effectively manage risk per dollar spent?  This is what’s really important.

I’m just wondering whether, instead of trying to globally force-feed a definition across a contentious landscape of religion and philosophy, we could spend less time arguing about terms and more time solving problems.  Ask the business how they want to see your security value communicated and go from there.  If they want ROI, then fine…define the "R" appropriately and move on.

I’m going to "return" to work now… 😉

/Hoff

Profiling Data at the Network Layer and Controlling Its Movement Is a Bad Thing?

June 3rd, 2007 2 comments

I’ve been watching what looks like a car crash in slow motion, and for some strange reason I feel some affinity for (and/or responsibility for) what is unfolding in the debate between Rory and Rob.

What motivated me to comment on this ongoing exploration of data-centric security was Rory’s latest post, in which he appears to refer to some points I raised in my original post but still seems bent on the idea that the crux of my concept was tied to DRM:

So .. am I anti-security? Nope I’m extremely pro-security. My feeling is however that the best way to implement security is in ways which it’s invisable to users. Every time you make ordinary business people think about security (eg, usernames/passwords) they try their darndest to bypass those requirements.

That’s fine and I agree.  The concept of ADAPT is completely transparent to "users."  This doesn’t obviate the fact that someone will have to be responsible for characterizing what is important and relevant to the business in terms of "assets/data," attaching weight/value to them, and setting some policies regarding how to mitigate impact and ultimately risk.

Personally I’m a great fan of network segregation and defence in depth at the network layer. I think that devices like the ones crossbeam produce are very useful in coming up with risk profiles, on a network by network basis rather than a data basis and managing traffic in that way. The reason for this is that then the segregation and protections can be applied without the intervention of end-users and without them (hopefully) having to know about what security is in place.

So I think you’re still missing my point.  The example I gave of the X-Series using ADAPT takes a combination of best-of-breed security software components such as FW, IDP, WAF, XML, AV, etc. and provides you with segregation as you describe.  HOWEVER, the (r)evolutionary delta here is that the ADAPT profiling of content set by policies which are invisible to the user at the network layer allows one to make security decisions on content in context and control how data moves.

So to use the phrase that I’ve seen in other blogs on this subject, I think that the "zones of trust" are a great idea, but the zone’s shouldn’t be based on the data that flows over them, but the user/machine that are used. It’s the idea of tagging all that data with the right tags and controlling it’s flow that bugs me.

…and thus it’s obvious that I completely and utterly disagree with this statement.  Without tying some sort of identity (pseudonymity) to the user/machine AND combining it with identifying the channels (applications) and the content (payload) you simply cannot make an informed decision as to the legitimacy of the movement/delivery of this data.
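To make that concrete, here’s a minimal sketch of the kind of verdict such an engine could render by combining those three inputs.  The names, tags and rules are all hypothetical, and this is illustrative pseudologic, not the actual ADAPT implementation:

```python
# Illustrative sketch of a "content in context" decision at the network
# layer; hypothetical names and rules, not the actual ADAPT implementation.
from dataclasses import dataclass

@dataclass
class Flow:
    user: str         # pseudonymous identity bound to the flow
    application: str  # the channel, e.g. "smtp", "http-upload"
    content_tag: str  # classification attached transparently in-network

# Hypothetical policy keyed on (content tag, channel); None = any channel.
POLICY = {
    ("pci-cardholder", "smtp"):        "deny",
    ("pci-cardholder", "http-upload"): "deny",
    ("internal-only",  "smtp"):        "allow-internal",
    ("public",         None):          "allow",
}

def verdict(flow: Flow) -> str:
    # Most-specific match first; a fuller sketch would also key on
    # flow.user for per-role exceptions.
    for key in ((flow.content_tag, flow.application),
                (flow.content_tag, None)):
        if key in POLICY:
            return POLICY[key]
    return "deny"  # default-deny anything untagged or unclassified

print(verdict(Flow("c.hoff", "smtp", "pci-cardholder")))  # deny
print(verdict(Flow("c.hoff", "ftp",  "public")))          # allow
```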

I used the case of being able to utilize client-side tagging as an extension to ADAPT, NOT as a dependency.  Go back and re-read the post; it’s a network-based transparent tagging process that attaches the tag to the traffic as it moves around the network.  I don’t understand why that would bug you?

So that’s where my points in the previous post came from, and I still reckon their correct. Data tagging and parsing relies on the existance of standards and their uptake in the first instance and then users *actually using them* and personally I think that’s not going to happen in general companies and therefore is not the best place to be focusing security effort…

Please explain this to me?  What standards need to exist in order to tag data — unless of course you’re talking about the heterogeneous exchange and integration of tagging data at the client side across platforms?  Not so if you do it at the network layer WITHIN the context of the architecture I outlined; the clients, switches, routers, etc. don’t need to know a thing about the process as it’s done transparently.

I wasn’t arguing that this is the end-all-be-all of data-centric security, but it’s worth exploring without deadweighting it to the negative baggage of DRM and the existing DLP/Extrusion Prevention technologies and methodologies that currently exist.

ADAPT is doable and real; stay tuned.

/Hoff

Clean Pipes – Less Sewage or More Potable Water?

May 6th, 2007 2 comments

Jeff Bardin over on the CSO blog planted an interesting stake in the ground when he posited "Connectivity As A Utility: Where are My Clean Pipes?"

Specifically, Jeff expects that his (corporate?) Internet service functions in the same manner as his telephone service via something similar to a "do not call list."  Basically, he opts out by placing himself on the no-call list and telemarketers cease to call. Others might liken it to turning on a tap and getting clean, potable water; you pay for a utility and expect it to be usable.  All of it.

Many telecommunications providers want to charge you for having clean pipes, deploying a suite of DDoS services that you have to buy to enhance your security posture.  Protection of last mile bandwidth is very key to network availability as well as confidentiality and integrity. If I am subscribing for a full T1, shouldn’t I get the full T1 as part of the price and not just a segment of the T1? Why do I have to pay for the spam, probes, scans, and malicious activity that my telecommunications service provider should prevent at 3 miles out versus my having to subscribe to another service to attain clean pipes at my doorstep?

I think that most people would agree with the concept of clean pipes in principle.  I can’t think of any other utility where the service levels delivered are taken with such a lackadaisical, best-effort approach and where the consumer can almost always expect that some amount (if not the majority) of the utility is unusable.

Over the last year, I’ve met with many of the largest ISP’s, MSSP’s, TelCo’s and Mobile Operators on the planet and all are in some phase of deploying some sort of clean pipes variant.  Gartner even predicts that a large amount of security will move "into the cloud."

In terms of adoption, EMEA is leaps and bounds ahead of the US and APAC in these sorts of services and will continue to be.  The relative oligopolies associated with smaller nation-states allow for much more agile and flexible service definition and roll-outs — no less complex, mind you.  It’s incredible to see just how wide the gap is between what consumers (SME/SMB/Mobile as well as large enterprise) are offered in EMEA and what they’re offered in the good-ol’ U S of A.

However, the stark reality is that the implementation of clean pipes by your service provider(s) comes down to a balance of two issues: efficacy and economics, with each varying dramatically with the market being served; the large enterprise’s expectations and requirements look very, very different from the SME/SMB.

Let’s take a look at both of these elements.

ECONOMICS

If you ask most service providers about so-called clean pipes up to a year ago, you could expect to get an answer that was based upon a "selfish" initiative aimed at stopping wasteful bandwidth usage upstream in the service provider’s network, not really protecting the consumer. 

The main focus here is really on DDoS and viri/worm propagation.  Today, the closest you’ll come to "clean pipes" is usually some combination of the following services deployed both (still) at the customer premises as well as somewhere upstream:

  • DoS/DDoS
  • Anti-Virus
  • Anti-Spam
  • URL Filtering/Parental Controls
  • Managed Firewall/IDS/IPS

What is interesting about these services is that they basically cover the same functions you can now get in those small little UTM boxes that consolidate security functionality at the "perimeter."  The capital cost of these devices and the operational levies associated with their upkeep are pretty close for the SME/SMB; when you balance the "good enough" service this market gets against the overall availability of the "in the cloud" offerings, UTM makes more sense for many in the near term.

For the large enterprise, the story is different.  Outsourcing some level of security to an MSSP (or perhaps even the entire operation), or moving some amount upstream, is a matter of core competence: it lets internal teams focus on the things that matter most while the low-hanging fruit gets filtered out and monitored by someone else.  I describe that as filtering out the lumps.  Some enormous companies have outsourced not only their security functions but their entire IT operations and data center assets in this manner.  It’s not pretty, but it works.

I’m not sure they are any more secure than they were before, however.  The risk simply was transferred whilst the tolerance/appetite for it didn’t change at all.  Puzzling.

Is it really wrong to think that companies (you’ll notice I said companies, not "people" in the general sense) should pay for clean pipes?  I don’t think it is.  The reality is that for non-commercial subscribers such as home users, broadband or mobile users, some amount of bandwidth hygiene should be free — the potable water approach.

I think, however, that should a company which expects elevated service levels and commensurate guarantees of such, want more secure connectivity, they can expect to ante up.  Why?  Because the investment required to deliver this sort of service costs a LOT of money — both to spin up and to instantiate over time.  You’re going to have to pay for that somewhere.

I very much like Jeff’s statistics:

We stop on average for our organization nearly 600 million malicious emails per year at our doorstep averaging 2.8 gigabytes of garbage per day. You add it up and we are looking at nearly a terabyte of malicious email we have to stop. Now add in probes and scans against HTTP and HTTPS sites and the number continues to skyrocket.
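A quick back-of-the-envelope check (decimal units assumed) shows those figures are internally consistent:

```python
# Sanity-checking Jeff's numbers (decimal units assumed).
gb_per_day = 2.8
gb_per_year = gb_per_day * 365
print(gb_per_year)             # ~1022 GB -- "nearly a terabyte"

emails_per_year = 600_000_000
kb_per_email = gb_per_year * 1_000_000 / emails_per_year
print(round(kb_per_email, 1))  # ~1.7 KB average per blocked message
```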

Again, even though Jeff’s organization isn’t small by any means, the stuff he’s complaining about here is really the low-hanging fruit.  It doesn’t make a dent in the targeted, malicious and financially-impacting security threats that really demand a level of service no service provider will be able to deliver without a huge cost premium.

I won’t bore you with the details, but the level of high-availability, resilience, performance, manageability, and provisioning required to deliver even this sort of service is enormous.  Most vendors simply can’t do it, and most service providers are slow to invest in proprietary solutions that won’t scale economically with the operational models in place.

Interestingly, vendors such as McAfee, as recently as 2005, announced with much fanfare that they were going to deliver technology, services and a united consortium of participating service providers with the following lofty clean-pipes goals (besides selling more product, that is):

The initiative is one part of a major product and services push from McAfee, which is developing its next generation of carrier-grade security appliances and ramping up its enterprise security offerings with NAC and secure content management product releases planned for the first half of next year, said Vatsal Sonecha, vice president of market development and strategic alliances at McAfee, in Santa Clara, Calif.

Clean Pipes will be a major expansion of McAfee’s managed services offerings. The company will sell managed intrusion prevention; secure content management; vulnerability management; malware protection, including anti-virus, anti-spam and anti-spyware services; and mobile device security, Sonecha said.

McAfee is working with Cable and Wireless PLC, British Telecommunications PLC (British Telecom), Telefónica SA and China Network Communications (China Netcom) to tailor its offerings through an invitation-only group it calls the Clean Pipes Consortium.

http://www.eweek.com/article2/0,1895,1855188,00.asp

Look at all those services!  What have they delivered as a service in the cloud or clean pipes?  Nada. 

The chassis-based products which were to deliver these services never materialized, and neither did the services.  Why?  Because it’s really damned hard to do correctly.  Just ask Inkra, Nexsi, CoSine, etc.  Or you can ask me.  The difference is, we’re still in business and they’re not.  It’s interesting to note that every one of those "consortium members," with the exception of Cable and Wireless, is a Crossbeam customer.  Go figure.

EFFICACY

Once the provider starts filtering at the ingress/egress, one must trust that the things being filtered won’t have an impact on performance — or confidentiality, integrity and availability.  Truth be told, as simple as it seems, it’s not just about raw bandwidth.  Service levels must be maintained and the moment something that is expected doesn’t make its way down the pipe, someone will be screaming bloody murder for "slightly clean" pipes.

Ask me how I know.  I’ve lived through inconsistent application of policies, non-logged protocol filtering, dropped traffic and asymmetric issues introduced by on-prem and in-the-cloud MSSP offerings.  Once the filtering moves past your prem. as a customer, your visibility does too.  Those fancy dashboards don’t do a damned bit of good, either.  Ever consider the forensic impact?

Today, if you asked a service provider what constitutes their approach to clean pipes, most will refer you back to the same list I referenced above:

  • DoS/DDoS
  • Anti-Virus
  • Anti-Spam
  • URL Filtering/Parental Controls
  • Managed Firewall/IDS/IPS

The problem is that most of these solutions are disparate point products run by different business units at different parts of the network.  Most are still aimed at the perimeter service — it’s just that the perimeter has moved outward a notch in the belt.

Look, for the SME/SMB (or mobile user), "good enough" is, for the most part, good enough.  Having an upstream provider filter out a bunch of spam and viri is a good thing, and most firewall rules in place in the SME/SMB block everything but a few inbound ports to DMZ hosts (if there are any) and allow everything from the inside to go out.  Not very complicated, and it doesn’t take a rocket scientist to see how, from the perspective of what is at risk, this service doesn’t pay off handsomely.

From the large enterprise’s perspective, I’d say that if you are going to expect that operational service levels will be met, think again.  What happens when you introduce web services, SOA and heavy XML onto externally-exposed network stubs?  What happens when Web 2/3/4.x technologies demand more and more security layers deployed alongside the mechanics and messaging of the service?

You can expect problems, and the lack of transparency will hurt in all but the simplest of cases.

Think your third party due diligence requirements are heady now?  Wait until this little transference of risk gets analyzed when something bad happens — and it will.  Oh how quickly the pendulum will swing back to managing this stuff in-house again.

This model doesn’t scale and it doesn’t address the underlying deficiencies in the most critical elements of the chain: applications, databases and end-point threats such as co-opted clients as unwilling botnet participants.

But to Jeff’s point, if he didn’t have to spend money on the small stuff above, he could probably spend it elsewhere where he needs it most.

I think services in the cloud/clean pipes makes a lot of sense.  I’d sure as hell like to invest less in commoditizing functions at the perimeter and on my desktop.  I’m just not sure we’re going to get there anytime soon.

/Hoff



Risk Assessment Does Not Equal Risk Management

March 12th, 2007 1 comment

Symantec announced the acquisition of 4FrontSecurity today and will absorb their product/service offerings into Symantec’s Security and Compliance Management group.  The press release sadly describes the deal within the context of a very myopic view of managing risk today:

[the acquisition will]…bring new tools to capture and track procedural controls and measure them against a variety of industry best practices and standards

Put another way, "we’ll dress up compliance management by calling it Risk Management."  And just to be clear, risk assessment is not the same as risk management.

4FrontSecurity is a small company that is focused on an emerging market niche that allows companies to automate the collection, processing, articulation and compliance measurements of risk assessment data.  Again, that’s not the same thing as managing risk.  Managing risk includes asset mapping, business impact, remediation and modeling, amongst other things.  Until we are also able to factor in the human element, risk management tools will never be truly complete. 

I posted last week about Skybox in particular.  RedSeal Systems also has a similar product.  Each of these products provides for the articulation of a company’s risk posture from a slightly different perspective.  I have not had any hands-on experience with RedSeal, but I have with Skybox.  I have had zero visibility into 4FrontSecurity’s products, so I have no empirical way of comparing the three.

I am frustrated to see this trend continue as the larger security companies (à la Symantec, McAfee, etc.) encapsulate this compliance-driven measurement approach within their broader "risk management" messaging while continuing to expand their toolset portfolios one acquisition at a time.

Recently, PatchLink acquired STAT from Harris to "…allow PatchLink to improve its vulnerability management products to help enterprises address risk management and policy-based compliance."  Vulnerability and patch management does not equal risk management.

I’m glad to see companies using the term Risk Management; I just wish it were used in the proper context and not to perfume a pig.

/Hoff


Web 2.0 can’t be protected by Web 1.0 Security Models when Attackers are at Attacker 3.0…

March 2nd, 2007 No comments

Gunnar Peterson (1 Raindrop blog) continues to highlight the issues of implementing security models which are not keeping pace with the technology they are deployed to protect.  Notice I didn’t say "designed" to protect.

Specifically, in his latest entry titled "Understand Web 2.0 Security Issues – As Easy as 2, 1, 3" he articulates (once again) the security problem we cannot seem to solve because we simply refuse to learn from our mistakes and proactively address security before it becomes a problem:

"So let’s do the math, we have rich Web 2.0 and its rich UI and lots
of disparate data and links, we are protecting these brand new
2007-built apps with a Web 1.0 security model that was invented in
1995. This would not be a bad thing at all if the attacker community
had learned nothing in the last 12 years, alas they have already
upgraded to attacker 3.0, and so can use Web 2.0 to both attack and distribute attacks.

2.0 functionality, 1.0 security, 3.0 attackers. this cannot stand."

A-Friggin’-Men.  Problem is, unless we reboot the entire human race (or at least developers and security folk) it’s going to take a severe meltdown to initiate change.

Oh, and BTW, just because it bugged me when Thomas Ptacek balked while asking what I meant in a presentation of mine where I said:

"What happens when we hit Web3.0 and we’re still only at
Security 2.4beta11?"

…and he asked:

What does this even mean?

…the answer is simple: please see Gunnar’s post above.  It’s written much better, but I trust this is all cleared up now?

A Spectacular Risk Management Blog

March 1st, 2007 2 comments

It’s not often that I will read back through every post archived on a blog, but I must tell you that I have found kindred spirits (even if they don’t know it) in Alex and Jack’s RiskAnalys.is blog.  Fantastic stuff.  The work they have done bringing FAIR (Factor Analysis of Information Risk) to the world at large is awesome.

Something I’d like to do is relate FAIR to OCTAVE, which I have used to feed SRM systems like Skybox (because I’m obviously not busy enough…).
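For the uninitiated, FAIR’s core decomposition boils risk down to loss event frequency times probable loss magnitude.  A minimal sketch (simplified from the full taxonomy, with made-up inputs) looks something like this:

```python
# Minimal sketch of FAIR's core decomposition (simplified; the full
# taxonomy has more factors).  All inputs below are hypothetical.

def loss_event_frequency(threat_event_frequency, vulnerability):
    # vulnerability: probability a threat event becomes a loss event
    return threat_event_frequency * vulnerability

def annualized_risk(lef, probable_loss_magnitude):
    return lef * probable_loss_magnitude

lef = loss_event_frequency(threat_event_frequency=12,  # events/year
                           vulnerability=0.25)         # 25% succeed
print(annualized_risk(lef, probable_loss_magnitude=50_000))  # $150,000/yr
```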

I’m not usually at a loss for words, but these guys really, really have an amazing grasp of the realities, vagaries and challenges of assessing, communicating and managing risk. 

Please do yourself a favor and read/subscribe to their blog and better yet, check out FAIR and Risk Management Insight’s (RMI) website.

Really great stuff.


People Are Tools…Not Appliances

December 13th, 2006 2 comments

Alan Shimel is commenting on his blog in a post titled "People are not appliances they’re flexible."  In this entry he muses about vocational "flexibility" and what appears to be the "cosmic humanity" of folks in the IT/Security space.

He also keeps talking about the need to keep buying COTS hardware appliances…he’ll never learn!

Specifically, Alan’s argument (which is orthogonal to the actual topic) is that as specialized appliances proliferate, the operators and administrators of said appliances need not also specialize.  In fact, he waxes on about the apparent good-natured ebb and flow of utilitarian socialism and how ultimately we’re all re-trainable and can fluidly move from one discipline to another irrespective of the realities and vagaries of culture and capability.

Using that logic, it seems that a help-desk admin who deploys patches from one appliance can just pick up and start doing IDS analysis on another?  How about that same "appliance" technician reading PCI for Dummies and starting to manage firewall appliances, manipulating policy?  Sure, they’re re-trainable, but at what incidental cost?  Seems a little naive a statement for my tastes.

Mike Murray from nCircle on the other hand suggests that Enterprises inherently gravitate toward silos.  I totally agree — emphatically so when we speak about larger Enterprises.  Operationalizing anything within a big machine means that you have political, operational and economic silos occurring naturally.  It’s even a byproduct of compliance, separation of duties and basic audit-output mitigation strategies.  Specializing may be "bad" but it’s what happens.

Appliances don’t cause this; the quest for money, or the love of what you do, does.

Even if Alan ignores the point that you don’t have to keep buying individual appliances (you can consolidate them), the fact remains that different elements within the organization manage the functions on them.  Even on our boxes…when you have firewall, IDP and AV in an X80 chassis, three different groups (perhaps more) manage and operate these solutions.  Silos, each and every one of them.

Nature of the beast.

That being said, it doesn’t mean I wouldn’t *like* to see more cross-functional representation across solution sets; it’s just not reality:

Evolution teaches us that too specialized a species is a recipe for extinction. That is what we need from our appliance models, flexibility and adaptability, not more silos!  We need to break down the silos and have interaction among them to improve productivity.

One could take that argument and extrapolate it to explain why people are so polarized on certain issues such as (for example) security and its ultimate place in the Enterprise: in the network or in specialized appliances.   

Innovation, specialization and (dare I say) evolution suggest that survival of the "fittest" can be traced back to the ability not just to "survive" but to thrive, based upon the ability to adapt in specificity to what would otherwise be an extinguishing event.  Specialization does not necessarily imply a single temporal event; the cumulative or incremental effect of successive specialization can also explain how things survive.  Take the platypus as an example: it ended up with a beaver’s tail and a duck’s bill.  Go figure. 😉

What’s important here is the timing of this adaptation and how the movie plays forward.

Hoff