
Archive for the ‘Information Security’ Category

The Challenge of Virtualization Security: Organizational and Operational, NOT Technical

March 25th, 2008 7 comments

Taking the bull by the horns…

I’ve spoken many times over the last year on the impact virtualization brings to the security posture of organizations.  While there are certainly technology issues that we must overcome, we don’t have solutions today that can effectively deliver us from evil. 

Anyone looking for the silver bullet is encouraged to instead invest in silver buckshot.  No shocker there.

There are certainly technology and solution providers looking to help solve these problems, but honestly, they are constrained by the availability of, and visibility into, the VMMs/hypervisors of the virtualization platforms themselves. 

Obviously announcements like VMware’s VMsafe will help turn that corner, but VMsafe requires re-tooling of ISV software and new versions of the virtualization platforms.  It’s a year+ away and only addresses concerns for a single virtualization platform provider (VMware) and not others.

The real problem of security in a virtualized world is not technical, it is organizational and operational.

With the consolidation of applications, operating systems, storage, information, security and networking — all virtualized into a single platform rather than being discretely owned, managed and supported by (reasonably) operationally-mature teams — the biggest threat we face in virtualization is that we have now lost not only visibility, but also the clearly-defined lines of demarcation garnered from the separation of duties we had in the non-virtualized world.

Many companies have segmented off splinter cells of "virtualization admins" from the server teams, and they are often solely responsible for the virtualization platforms, which includes the care, feeding, diapering and powdering of not only the operating systems and virtualization platforms, but the networking and security functionality as well.

No offense to my brethren in the trenches, but this is simply a case of experience and expertise.  Server admins are not experts in network or security architectures and operations, just as the latter cannot hope to be experts in the former’s domain.

We’re in an arms race now where virtualization brings brilliant flexibility, agility and cost savings to the enterprise, but ultimately further fractures the tenuous relationships between the server, network and security teams.

Now that the first-pass consolidation pilots of virtualizing non-critical infrastructure assets have been held up as shining examples of ROI in our datacenters, security and networking teams are exercising their veto powers as virtualization efforts creep toward critical production applications, databases and transactional systems.

Quite simply, the ability to express risk, security posture, compliance, troubleshooting and the measurement of SLAs and dependencies within the construct of a virtualized world is much more difficult than in the discretely segregated physical world, and when taken to the mat on these issues, the virtual server admins simply cannot address them competently in the language of the security and risk teams.

This is going to make for some unneeded friction in what was supposed to be a frictionless effort.  If you thought the security teams were thought of as speed bumps before, you’re not going to like what happens soon when they try to delay/halt a business-driven effort to reduce costs, speed time-to-market, increase availability and enable agility.

I’ll summarize my prior recommendations as to how to approach this conundrum in a follow-on post, but the time is now to get these teams together and craft the end-play strategies and desired end-states for enterprise architecture in a virtualized world before we end up right back where we started 15+ years ago…on the hamster wheel of pain!

/Hoff

The Walls Are Collapsing Around Information Centricity

March 10th, 2008 2 comments

Since Mogull and I collaborate quite a bit on projects and share many thoughts and beliefs, I wanted to make a couple of comments on his last post on Information Centricity and remind the audience at home of a couple of really important points.

Rich’s post was short and sweet regarding the need for Information-Centric solutions with some profound yet subtle guideposts:

For information-centric security to become a reality, in the long term it needs to follow the following principles:

  1. Information (data) must be self-describing and defending.
  2. Policies and controls must account for business context.
  3. Information must be protected as it moves from structured to unstructured, in and out of applications, and changing business context.
  4. Policies must work consistently through the different defensive layers and technologies we implement.

I’m not convinced this is a complete list, but I’m trying to keep to my new philosophy of shorter and simpler. A key point that might not be obvious is that while we have self-defending data solutions, like DRM and label security, for success they must grow to account for business context. That’s when static data becomes usable information.

Mike Rothman gave an interesting review of Rich’s post:

The Mogull just laid out your work for the next 10 years. You just probably don’t know it yet. Yes, it’s all about ensuring that the fundamental elements of your data are protected, however and wherever they are used. Rich has broken it up into 4 thoughts. The first one made my head explode: "Information (data) must be self-describing and defending."

Now I have to clean up the mess. Sure things like DRM are a bad start, and have tarnished how we think about information-centric security, but you do have to start somewhere. The reality is this is a really long term vision of a problem where I’m not sure how you get from Point A to Point B. We all talk about the lack of innovation in security. And how the market just isn’t exciting anymore. What Rich lays out here is exciting. It’s also a really really really big problem. If you want a view of what the next big security company does, it’s those 4 things. And believe me, if I knew how to do it, I’d be doing it – not talking about the need to do it.

The comments I want to make are three-fold:

  1. Rich is re-stating, and Mike’s head is exploding around, the exact concepts that Information Survivability represents and the Jericho Forum trumpets in their Ten Commandments.  In fact, you can read all about that in prior posts I made on the subjects of the Jericho Forum, re-perimeterization, information survivability and information centricity.  I like this post on a process I call ADAPT (Applied Data and Application Policy Tagging) a lot; see the sketch after this list for a feel of what such tagging might look like.

    For reference, here are the Jericho Forum’s Ten Commandments. Please see #9:

    [Images: the Jericho Forum’s Ten Commandments, parts 1 and 2]

  2. As Mike alluded, DRM/ERM has received a bad rap because of how it’s been implemented — which has really left a sour taste in the mouths of consumers.  As a business tool, it is the precursor of information-centric policy and will become the linchpin of how we ultimately gain a foothold on solving the information resiliency/assurance/survivability problem.
  3. As to the innovation and dialog that Mike suggests is lacking in this space, I’d suggest he’s suffering from a bit of Shiitake-ism (a la mushroom-itis.)  The next generation of DLP solutions that are becoming CMP (Content Monitoring and Protection — a term I coined) are evolving to deal with just this very thing.  It’s happening.  Now.

    Further to that, I have been briefed by some very, very interesting companies that are in stealth mode who are looking to shake this space up as we speak.
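
To make the "self-describing and defending" idea a bit more concrete (the sketch promised in point 1 above), here’s a minimal illustration of ADAPT-style policy tagging. The class and field names are entirely mine for illustration; ADAPT isn’t specified at this level of detail:

```python
from dataclasses import dataclass

@dataclass
class PolicyTag:
    """Policy metadata that travels with the data it describes."""
    classification: str      # e.g. "confidential"
    business_context: str    # e.g. "payroll" (Rich's point 2: business context)
    allowed_actions: set     # e.g. {"read", "archive"}

@dataclass
class TaggedRecord:
    payload: bytes
    tag: PolicyTag

    def authorize(self, action: str) -> bool:
        # Any control layer (host, network, application) can consult the
        # tag directly instead of asking a central oracle what this data
        # is (Rich's point 4: consistent policy through every layer).
        return action in self.tag.allowed_actions

record = TaggedRecord(
    payload=b"emp_id,salary\n42,123456",
    tag=PolicyTag("confidential", "payroll", {"read", "archive"}),
)
assert record.authorize("read")
assert not record.authorize("email-external")
```

The point of the sketch is locality: any enforcement layer can make a policy decision by inspecting the object itself, which is what lets controls behave consistently as the data moves between contexts.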

So, prepare for Information Survivability, increased Information Resilience and assurance.  Coming to a solution near you…

/Hoff

McGovern’s “Ten Mistakes That CIOs Consistently Make That Weaken Enterprise Security”

February 26th, 2008 11 comments

James McGovern over at the Enterprise Architect blog wrote a really fantastic Letterman-style Top 10 of mistakes that CIOs make regarding enterprise security.  I’ve reproduced his list in its entirety below and added a couple of my own… 😉

  • Use process as a substitute for competence: The answer to every problem is almost always methodology, so you must focus savagely on CMMi and ITIL while not understanding the fact that hackers attack software.
  • Ostrich Principle: Since you were so busy aligning with the business, which really means that you are neither a real IT professional nor a business professional, you have spent much of your time perfecting memorization of cliche phrases and nomenclature and hoping that the problem will go away if you ignore it.
  • Putting network engineers in charge of security: When will you learn that folks with a network background can’t possibly make your enterprise secure? If a hacker attacks software and steals data yet you respond with hardware, who do you really think is going to win the battle?
  • Over-rely on your vendors by relabelling them as partners: You trust your software vendors and outsourcing firms so much that you won’t even perform due diligence on their staff to understand whether they have actually received one iota of training.
  • Rely primarily on a firewall and antivirus: Here is a revelation. Firewalls are not security devices; they are more for network hygiene. Ever consider that a firewall can’t possibly stop attacks related to cross-site scripting, SQL injection and so on? Network devices only protect the network and can’t do much nowadays to protect applications.
  • Stepping in your own leadership: Authorize reactive, short-term fixes so problems re-emerge rapidly
  • Thinking that security is expensive while also thinking that CMMi isn’t: Why do you continue to fail to realize how much money your information and organizational reputation are worth?
  • The only thing you need is an insulting firm to provide you with a strategy: Fail to deal with the operational aspects of security; make a few fixes and then don’t allow the follow-through necessary to ensure the problems stay fixed.
  • Getting it twisted to realize that Business/IT alignment is best accomplished by talking about Security and not SOA: Failing to understand the relationship of information security to the business problem — they understand physical security but do not see the consequences of poor information security. Let’s be honest, your SOA is all about integration as you aren’t smart enough to do anything else.
  • Put people in roles and give them titles, but don’t actually train them: Assign untrained people to maintain security and provide neither the training nor the time to make it possible to do the job.
Here are some of my favorites that I’ve added.  I’ll work on adding the expanded explanations later:

    1. Keep talking about threats and vulnerabilities and not about risk
    2. Manage your security investments like throw-away CapEx cornflakes and not as a portfolio
    3. Maintain that security is a technology issue
    4. Awareness initiatives are good for sexual harassment and copier training, not security
    5. Security is top secret, we can’t talk about what we do
    6. All we need to do is invest just enough to be compliant, we don’t need to be secure
    7. We can’t measure security effectiveness
    8. Virtualization changes nothing in the security space.
    9. We’ve built our three year security strategy and we’re aligned to the business
    10. One audit a year from a trusted third party indicates our commitment to security

Got any more?

/Hoff

Pondering Implications On Standards & Products Due To Cold Boot Attacks On Encryption Keys

February 22nd, 2008 4 comments

You’ve no doubt seen the latest handiwork of Ed Felten and his team from the Princeton Center for Information Technology Policy regarding cold boot attacks on encryption keys:

Abstract: Contrary to popular assumption, DRAMs used in most modern computers retain their contents for seconds to minutes after power is lost, even at operating temperatures and even if removed from a motherboard. Although DRAMs become less reliable when they are not refreshed, they are not immediately erased, and their contents persist sufficiently for malicious (or forensic) acquisition of usable full-system memory images. We show that this phenomenon limits the ability of an operating system to protect cryptographic key material from an attacker with physical access. We use cold reboots to mount attacks on popular disk encryption systems — BitLocker, FileVault, dm-crypt, and TrueCrypt — using no special devices or materials. We experimentally characterize the extent and predictability of memory remanence and report that remanence times can be increased dramatically with simple techniques. We offer new algorithms for finding cryptographic keys in memory images and for correcting errors caused by bit decay. Though we discuss several strategies for partially mitigating these risks, we know of no simple remedy that would eliminate them.
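
The "finding cryptographic keys in memory images" part is less exotic than it sounds. Here’s a minimal sketch of the core idea: scan a dump for byte runs that are internally consistent with an AES-128 key schedule. This is my own illustration, not the authors’ code, and it omits the paper’s error correction for bit decay:

```python
def rotl8(x, n):
    return ((x << n) | (x >> (8 - n))) & 0xFF

def make_sbox():
    """Generate the AES S-box (inverse in GF(2^8) + affine transform)."""
    sbox = [0] * 256
    p = q = 1
    while True:
        p = (p ^ (p << 1) ^ (0x1B if p & 0x80 else 0)) & 0xFF  # p *= 3
        q ^= (q << 1) & 0xFF                                   # q /= 3
        q ^= (q << 2) & 0xFF
        q ^= (q << 4) & 0xFF
        if q & 0x80:
            q ^= 0x09
        sbox[p] = q ^ rotl8(q, 1) ^ rotl8(q, 2) ^ rotl8(q, 3) ^ rotl8(q, 4) ^ 0x63
        if p == 1:
            break
    sbox[0] = 0x63
    return sbox

RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key(key, sbox):
    """AES-128 key expansion: 16-byte key -> 176-byte round-key schedule."""
    w = bytearray(key)
    for rnd in range(10):
        # SubWord(RotWord(previous word)) ^ Rcon
        t = [sbox[w[-3]] ^ RCON[rnd], sbox[w[-2]], sbox[w[-1]], sbox[w[-4]]]
        for _ in range(4):  # four 4-byte words per round
            t = [t[k] ^ w[len(w) - 16 + k] for k in range(4)]
            w.extend(t)
    return bytes(w)

def find_aes_keys(image, sbox):
    """Flag any offset whose next 176 bytes form a valid key schedule."""
    hits = []
    for off in range(len(image) - 175):
        cand = image[off:off + 16]
        if expand_key(cand, sbox) == image[off:off + 176]:
            hits.append((off, cand.hex()))
    return hits

sbox = make_sbox()
key = bytes(range(16))
# A fake "memory image" with a live key schedule buried in it:
image = bytes(1024) + expand_key(key, sbox) + bytes(1024)
print(find_aes_keys(image, sbox))  # [(1024, '000102030405060708090a0b0c0d0e0f')]
```

A real tool tolerates some Hamming distance between the observed and recomputed schedule words so that keys survive partial bit decay; exact matching is enough to show the principle.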

Check out the video below (if you have scripting disabled, here’s the link.)  Fascinating and scary stuff.

Would a TPM implementation mitigate this if the keys weren’t stored (even temporarily) in RAM?

Given the surge lately toward full disk encryption products, I wonder how the market will react to this.  I am interested in both the broad industry impact and the response from vendors.  I won’t be surprised if we see new products crop up in a matter of days advertising magical defenses against such attacks, as well as vendors scrambling to do damage control.

This might be a bit of a reach, but equally as interesting to me are the potential implications upon DoD/Military crypto standards such as FIPS 140-2 (I believe the draft of 140-3 is circulating…)  In the case of certain products at specific security levels, it’s obvious based on the video that one wouldn’t necessarily need physical access to a crypto module (or RAM) in order to potentially attack it.

It’s always amazing to me when really smart people think of really creative, innovative and (in some cases) obvious ways of examining what we all take for granted.

A Worm By Any Other Name Is…An Information Epidemic?

February 18th, 2008 2 comments

Martin McKeay took exception to some interesting Microsoft research suggesting that the methodologies and tactics used by malicious software such as worms/viruses could also be used as an effective distributed defense against them:

Microsoft researchers are hoping to use "information epidemics" to distribute software patches more efficiently.

Milan Vojnović and colleagues from Microsoft Research in Cambridge, UK, want to make useful pieces of information such as software updates behave more like computer worms: spreading between computers instead of being downloaded from central servers.

The research may also help defend against malicious types of worm, the researchers say.

Software worms spread by self-replicating. After infecting one computer they probe others to find new hosts. Most existing worms randomly probe computers when looking for new hosts to infect, but that is inefficient, says Vojnović, because they waste time exploring groups or "subnets" of computers that contain few uninfected hosts.

Despite the really cool moniker (information epidemic), this isn’t a particularly novel distribution approach and, in fact, we’ve seen malware do this.  However, it is interesting to see that an OS vendor (Microsoft) is continuing to actively engage in research to explore this approach despite the opinions of others who simply claim it’s a bad idea.  I’m not convinced either way, however.

I, for one, am all for resilient computing environments that are aware of their vulnerabilities and can actively defend against them.  I will be interested to see how this new paper builds off of work previously produced on the subject and its corresponding criticism.

Vojnović’s team have designed smarter strategies that can exploit the way some subnets provide richer pickings than others.

The ideal approach uses prior knowledge of the way uninfected computers are spread across different subnets. A worm with that information can focus its attention on the most fruitful subnets – infecting a given proportion of a network using the smallest possible number of probes.

But although prior knowledge could be available in some cases – a company distributing a patch after a previous worm attack, for example – usually such perfect information will not be available. So the researchers have also developed strategies that mean the worms can learn from experience.

In the best of these, a worm starts by randomly contacting potential new hosts. After finding one, it uses a more targeted approach, contacting only other computers in the same subnet. If the worm finds plenty of uninfected hosts there, it keeps spreading in that subnet, but if not, it changes tack.
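
The "learn from experience" strategy quoted above is easy to model. Here’s a toy simulation comparing random probing with the adaptive stay-in-a-subnet-until-it-stops-paying approach; the subnet layout, address space and back-off threshold are all invented for illustration and are not Vojnović’s actual parameters:

```python
import random

ADDRS_PER_SUBNET = 256  # toy /24 subnets (an assumption, not from the article)

def probes_to_spread(host_counts, adaptive, target=0.9, seed=1):
    """Count probes until `target` of all hosts are infected."""
    rng = random.Random(seed)
    # Map every address to whether a host actually lives there.
    live = {(s, a): a < n
            for s, n in enumerate(host_counts)
            for a in range(ADDRS_PER_SUBNET)}
    total = sum(host_counts)
    infected, probes = set(), 0
    focus, misses = None, 0  # subnet being worked, recent failures there
    while len(infected) < target * total:
        if adaptive and focus is not None:
            addr = (focus, rng.randrange(ADDRS_PER_SUBNET))
        else:
            addr = (rng.randrange(len(host_counts)),
                    rng.randrange(ADDRS_PER_SUBNET))
        probes += 1
        if live[addr] and addr not in infected:
            infected.add(addr)
            focus, misses = addr[0], 0   # rich subnet: keep working it
        elif adaptive and focus is not None:
            misses += 1
            if misses > 8:               # arbitrary threshold: change tack
                focus, misses = None, 0
    return probes

# A network where a few subnets are dense and most are nearly empty:
layout = [200, 5, 5, 5, 200, 5, 5, 5]
print("random probing:  ", probes_to_spread(layout, adaptive=False))
print("adaptive probing:", probes_to_spread(layout, adaptive=True))
```

On layouts like this, where hosts cluster in a few dense subnets, the adaptive strategy needs noticeably fewer probes, which is the whole argument for using the technique to push patches.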

That being the case, here’s some of Martin’s heartburn:

But the problem is, if both beneficial and malign software show the same basic behavior patterns, how do you differentiate between the two? And what’s to stop the worm from being mutated once it’s started, since bad guys will be able to capture the worms and possibly subvert their programs?

The article isn’t clear on how the worms will secure their network, but I don’t believe this is the best way to solve the problem that’s being expressed. The problem being solved here appears to be one of network traffic spikes caused by the download of patches. We already have widely used protocols that solve this problem: bittorrents and P2P programs. So why create a potentially hazardous situation using worms when a better solution already exists? Yes, torrents can be subverted too, but these are problems that we’re a lot closer to solving than what’s being suggested.

I don’t want something that’s viral infecting my computer, whether it’s for my benefit or not. The behavior isn’t something to be encouraged. Maybe there’s a whole lot more to the paper, which hasn’t been released yet, but I’m not comfortable with the basic idea being suggested. Worm wars are not the way to secure the network.

I think that some of the points that Martin raises are valid, but I also think that he’s reacting mostly out of fear to the word ‘worm.’  What if we called it "distributed autonomic shielding?" 😉

Some features/functions of our defensive portfolio are going to need to become more self-organizing, autonomic and intelligent, and that goes for the distribution of intelligence and disposition also.  If we’re not going to advocate being offensive, then we should at least be offensively defensive.  This is one way of potentially doing that.

Interestingly, this dovetails into some discussions we’ve had recently with Andy Jaquith and Amrit Williams; the notion of herds or biotic propagation and response is really quite fascinating.  See my post titled "Thinning the Herd & Chlorinating the Gene Pool"

I’ve left out most of the juicy bits of the story, so you should go read it and churn on some of the very interesting points raised as part of the discussion.

/Hoff

Update: Schneier thinks this is a lousy idea. That doesn’t move me one direction or the other, but I think this is cementing my opinion that had the author not used the word ‘worm’ in his analogy, the idea might not be dismissed so quickly…

Also, Wismer, via a comment on Martin’s blog, pointed to an interesting read from Vesselin Bontchev titled "Are "Good" Computer Viruses Still a Bad Idea?"

Update #2: See the comments section for how I think the use case argued by Schneier et al. is, um, slightly missing the point.  Strangely enough, check out the Network World article that just popped up, which says "This was not the primary scenario targeted for this research," according to a statement.

Duh.

Security Today == Shooting Arrows Through Sunroofs of Cars?

February 7th, 2008 14 comments

In this Dark Reading post, Peter Tippett, described as the inventor of what is now Norton Anti-virus, suggests that the bulk of InfoSec practices are "…outmoded or outdated concepts that don’t apply to today’s computing environments."

As I read through this piece, I found myself flip-flopping between violent agreement and incredulous eye-rolling from one paragraph to the next, caused somewhat by the overuse of hyperbole in some of his analogies.  This was disappointing, but overall, I enjoyed the piece.

Let’s take a look at Peter’s comments:

For example, today’s security industry focuses way too much time on vulnerability research, testing, and patching, Tippett suggested. "Only 3 percent of the vulnerabilities that are discovered are ever exploited," he said. "Yet there is a huge amount of attention given to vulnerability disclosure, patch management, and so forth."

I’d agree that the "industry" certainly focuses its efforts on these activities, but that’s exactly the mission of the "industry" that he helped create.  We, as consumers of security kit, have perpetuated a supply-driven security economy.

There’s a huge amount of attention paid to vulnerabilities, patching and prevention that doesn’t prevent, because at this point that’s all we’ve got.  Until we start focusing on the root cause rather than the symptoms, this is a cycle we won’t break.  See my post titled "Sacred Cows, Meatloaf, and Solving the Wrong Problems" for an example of what I mean.


Tippett compared vulnerability research with automobile safety research. "If I sat up in a window of a building, I might find that I could shoot an arrow through the sunroof of a Ford and kill the driver," he said. "It isn’t very likely, but it’s possible.

"If I disclose that vulnerability, shouldn’t the automaker put in some sort of arrow deflection device to patch the problem? And then other researchers may find similar vulnerabilities in other makes and models," Tippett continued. "And because it’s potentially fatal to the driver, I rate it as ‘critical.’ There’s a lot of attention and effort there, but it isn’t really helping auto safety very much."

What this really means, and what Peter never quite states, is that mitigating vulnerabilities in the absence of threat, impact or probability is a bad thing.  This is why I make such a fuss about managing risk instead of mitigating vulnerabilities.  If there were millions of malicious archers firing arrows through the sunroofs of unsuspecting Ford Escort drivers, then the ‘critical’ rating would be relevant given the probability and impact of all those slings and arrows of thine enemies…

Tippett also suggested that many security pros waste time trying to buy or invent defenses that are 100 percent secure. "If a product can be cracked, it’s sometimes thrown out and considered useless," he observed. "But automobile seatbelts only prevent fatalities about 50 percent of the time. Are they worthless? Security products don’t have to be perfect to be helpful in your defense."

I like his analogy and the point he’s trying to underscore.  What I find in many cases is that the binary evaluation of security efficacy — in products and programs — still exists.  In the absence of measuring the impact that something has on one’s risk posture, people revert to a non-gradient scale: 0% or 100%, insecure or secure.  Is being "secure" really important, or is managing to a level of risk that is acceptable — with or without losses — the really relevant measure of success?

This concept also applies to security processes, Tippett said. "There’s a notion out there that if I do certain processes flawlessly, such as vulnerability patching or updating my antivirus software, that my organization will be more secure. But studies have shown that there isn’t necessarily a direct correlation between doing these processes well and the frequency or infrequency of security incidents.

"You can’t always improve the security of something by doing it better," Tippett said. "If we made seatbelts out of titanium instead of nylon, they’d be a lot stronger. But there’s no evidence to suggest that they’d really help improve passenger safety."

I would like to see these studies.  I think that companies who have rigorous, mature and transparent processes that they execute "flawlessly" may not be more "secure" (a measurement I’d love to see quantified), but they are in a much better position to respond and recover when (not if) an event occurs.  Based upon the established corollary that we can’t be 100% "secure" in the first place, we know we’re going to have incidents.

Being able to recover from them, or continue to operate while under duress, is more realistic and important in my view.  That’s the point of information survivability.


Security teams need to rethink the way they spend their time, focusing on efforts that could potentially pay higher security dividends, Tippett suggested. "For example, only 8 percent of companies have enabled their routers to do ‘default deny’ on inbound traffic," he said. "Even fewer do it on outbound traffic. That’s an example of a simple effort that could pay high dividends if more companies took the time to do it."

I agree.  Focusing on efforts that eliminate entire classes of problems based upon reducing risk is a more appropriate use of time, money and resources.

Security awareness programs also offer a high rate of return, Tippett said. "Employee training sometimes gets a bad rap because it doesn’t alter the behavior of every employee who takes it," he said. "But if I can reduce the number of security incidents by 30 percent through a $10,000 security awareness program, doesn’t that make more sense than spending $1 million on an antivirus upgrade that only reduces incidents by 2 percent?"
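
Tippett’s arithmetic is worth spelling out. A quick back-of-the-envelope comparison (the 100-incident baseline is my assumption, not his):

```python
# Cost per incident avoided, using Tippett's numbers and an assumed
# baseline of 100 incidents/year (the baseline is hypothetical).
baseline_incidents = 100

options = {
    "awareness program": (10_000, 0.30),    # (cost, incident reduction)
    "antivirus upgrade": (1_000_000, 0.02),
}
for name, (cost, reduction) in options.items():
    avoided = baseline_incidents * reduction
    print(f"{name}: ${cost / avoided:,.0f} per incident avoided")
# awareness program: $333 per incident avoided
# antivirus upgrade: $500,000 per incident avoided
```

The baseline cancels out of the comparison, so whatever the real incident count, the awareness program wins by roughly three orders of magnitude per incident avoided.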

Nod.  That was the point of the portfolio evaluation process I gave in my disruptive innovation presentation:

24. Provide Transparency in portfolio effectiveness

[Slide: the portfolio effectiveness graph]

I didn’t invent this graph, but it’s one of my favorite ways of visualizing my investment portfolio by measuring in three dimensions: business impact, security impact and monetized investment.  All of these definitions are subjective within your organization (as well as how you might measure them.)

The Y-axis represents the "security impact" that the solution provides.  The X-axis represents the "business impact" that the solution provides, while the size of the dot represents the capex/opex investment made in the solution.

Each of the dots represents a specific solution in the portfolio.

If you have a solution that is a large dot toward the bottom-left of the graph, one has to question the reason for continued investment since it provides little in the way of perceived security and business value at high cost.  On the flipside, if a solution is represented by a small dot in the upper-right, the bang for the buck is high, as is the impact it has on the organization.

The goal would be to get as many of your investments in your portfolio from the bottom-left to the top-right with the smallest dots possible.
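
If you want to produce this view of your own portfolio, it’s a few lines of matplotlib. The solutions and scores below are invented placeholders; substitute your own three dimensions:

```python
import matplotlib.pyplot as plt

# Hypothetical portfolio: (business impact, security impact, annual spend $k)
portfolio = {
    "legacy NIDS":       (2, 3, 600),
    "AV suite":          (3, 3, 900),
    "DLP/CMP":           (7, 8, 400),
    "awareness program": (6, 7, 10),
}

fig, ax = plt.subplots()
for name, (biz, sec, spend) in portfolio.items():
    ax.scatter(biz, sec, s=spend, alpha=0.5)  # dot size tracks investment
    ax.annotate(name, (biz, sec))
ax.set_xlabel("Business impact")
ax.set_ylabel("Security impact")
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_title("Security investment portfolio")
plt.show()
```

Large dots crowding the lower-left of the resulting chart are the conversation starters.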

This transparency, and the process by which the portfolio is assessed, is delivered as an output of the strategic innovation framework, which is really comprised of part art and part science.

All in all, a good read from someone who helped create the monster and is now calling it ugly…

/Hoff

OMG, Availability Trumps Security! Oh, the Horror!

February 1st, 2008 3 comments

Michael Farnum’s making me shake my head today in confusion, based upon a post wherein he’s shocked that some businesses may favor availability over (ahem) "security."

Classically we’ve come to know and love (a)vailability as a component of security — part of the holy triumvirate paired with (c)onfidentiality and (i)ntegrity — but somehow it’s now scandalous that one of these concerns can matter more to a business than the others.

If one measures business impact against an asset, are you telling me, Mike, that all three are always equal?  Of course not…

Depending upon what’s important to maintain operations as an ongoing concern, or what is deemed more critical as a business decision, being available even under degraded service levels may be more important than preserving or enforcing confidentiality and integrity.  To some, it may not be.

The reality is that this isn’t an issue of absolutes.  The measured output of the investments in C, I and A isn’t binary — you’re not either 0% or 100% secure.  There are shades of gray.  Decisions are often made such that one of the elements of C, I and A is deemed more relevant or more important.

Businesses often decide to manage risk by trading off one leg of the stool for another.  You may very well end up with a wobbly seat, but there’s a difference between what we see in textbooks and what the realities in the field actually are.

Deal with it.  Sometimes businesses make calculated bets that straddle the fine line of acceptable loss and readiness, and decide to invest in certain things versus others.  Banks do this all the time.  Their goal is to be right more often than they are wrong.  They manage their risk.  They generally do it well.  Depending upon the element in question, sometimes A wins.  Sometimes it doesn’t.

Here’s a test.  Go turn off your Internet firewall and tell everyone you’re perfectly secure now.  Will everyone high-five you for a job well done?

Firewall’s down.  Business stops.  Not for "security’s sake."  Pushed the wrong button…

Compensating controls can help offset effects against C and I, but if an asset or service is not A(vailable), what good is it?  Again, this depends on the type of asset/service, and YMMV.  Sometimes C or I wins.

Thanks to the glut of security band-aids we’ve thrown at tackling "security" problems these days, availability has become — quite literally — a function of security.  As we see the trend move from managing "security" toward managing "risk," we’ll see more of this heresy (read: common sense) appear as mainstream thinking.

Since we can’t seem to express (for the most part) how things like firewalls translate to a healthier bottom line, better productivity or efficiency, it’s no wonder businesses are starting to look to actionable risk management strategies that focus on operational business impact instead.

Measuring availability (at the macro level or transactionally) is easy.  IT knows how to do this.  Either something is available or it isn’t.  How do you measure confidentiality or integrity as a repeatable metric?
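
To underline the asymmetry, the availability side of the triad reduces to trivially repeatable arithmetic (the numbers below are invented), while nobody has the equivalent one-liner for C or I:

```python
# Macro-level availability for a 30-day month with 43 minutes of
# recorded downtime (both figures are hypothetical examples).
minutes_in_month = 30 * 24 * 60
downtime_minutes = 43
availability = 1 - downtime_minutes / minutes_in_month
print(f"availability: {availability:.3%}")  # availability: 99.900%
```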

In my comment to Michael (and Kurt Wismer) I note:

It’s funny how allergic you and Wismer are to the notion that managing risk may mean that “security” (namely C and I) isn’t always the priority.  Basic risk assessment process shows us that in many cases “availability” trumps "security."

This can’t be a surprise to either of you.

Depending upon your BCP/DR/Incident Response capabilities, the notion of a breakdown in C or I can be overcome by resilience that also has the derivative effect of preserving A.

Risk Management != Security.

However, good Security helps to reinforce and enforce those things which lend themselves toward making better decisions on how to manage risk.

What’s so hard to understand about that?

Sounds perfectly reasonable to me.

Security’s in the eye of the beholder.  Stop sticking your thumb in yours 😉

Speaking of which, Twitter’s down.  Damn!  Unavailability strikes again!

/Hoff

Thin Clients: Does This Laptop Make My Ass(ets) Look Fat?

January 10th, 2008 11 comments

Juicy Fat Assets, Ripe For the Picking…

So here’s an interesting spin on de/re-perimeterization: if people think we cannot achieve, and cannot afford to wait for, secure operating systems, secure protocols and self-defending information-centric environments, but need to "secure" their environments today, I have a simple question supported by a simple equation for illustration:

For the majority of mobile and internal users in a typical corporation who use the basic set of applications:

1. Assume a company that:
   …fits within the 90% of those who still have data centers, isn’t completely outsourced/off-shored for IT, supports a remote workforce that uses Microsoft OS and the usual suspect applications, and doesn’t plan on utilizing distributed grid computing and widespread third-party SaaS.
2. Take the following:
   Data Breaches.  Lost Laptops.  Non-sanitized corporate hard drives on eBay.  Malware.  Non-compliant asset configurations.  Patching woes.  Hardware failures.  Device Failure.  Remote Backup issues.  Endpoint Security Software Sprawl.  Skyrocketing security/compliance costs.  Lost Customer Confidence.  Fines.  Lost Revenue.  Reduced budget.
3. Combine With:
   Cheap Bandwidth.  Lots of types of bandwidth/access modalities.  Centralized Applications and Data.  Any Web-enabled Computing Platform.  SSL VPN.  Virtualization.  Centralized Encryption at Rest.  IAM.  DLP/CMP.  Lots of choices to provide thin-client/streaming desktop capability.  Offline-capable Web Apps.
4. Shake Well, Re-allocate Funding, Streamline Operations and "Security"…
5. You Get:
   Less Risk.  Less Cost.  Better Control Over Data.  More "Secure" Operations.  Better Resilience.  Assurance of Information.  Simplified Operations.  Easier Backup.  One Version of the Truth (data.)

I really just don’t get why we continue to deploy and are forced to support remote platforms we can’t protect, allow our data to inhabit islands we can’t control, and at the same time admit the inevitability of disaster while continuing to spend our money on solutions that can’t possibly solve the problems.

If we’re going to be information-centric, we should take the first rational and reasonable steps toward doing so.  Until the operating systems are more secure and the data can self-describe and cause the compute and network stacks to "self-defend," why do we continue to focus on the endpoint?  It’s a waste of time.

If we can isolate and reduce the number of avenues of access to data and leverage dumb presentation platforms to do it, why aren’t we?

…I mean besides the fact that an entire industry has been leeching off this mess for decades…


I’ll Gladly Pay You Tuesday For A Secure Solution Today…

The technology exists TODAY to centralize the bulk of our most important assets and allow our workforce to accomplish their goals, and the business to function just as well (perhaps better), without the need for data to actually "leave" the data centers in whose security we have already invested so much money.

Many people are doing that with their servers already with the adoption of virtualization.  Now they need to do it with their clients.

The only reason we’re now going absolutely stupid and spending money on securing endpoints in their current state is because we’re CAUSING (not just allowing) data to leave our enclaves.  In fact, with all this blabla2.0 hype, we’ve convinced ourselves we must.

Hogwash.  I’ve posted on the consumerization of IT, where companies are allowing their employees to use their own compute platforms.  How do you think many of them do this?

Relax, Dude…Keep Your Firewalls…

In the case of centralized computing and streamed desktops to dumb/thin clients, the "perimeter" still includes our data centers and security castles/moats, but it also encapsulates a streamed, virtualized, encrypted and authenticated thin-client session bubble.  Instead of worrying about the endpoint, it’s nothing more than a flickering display with a keyboard/mouse.

Let your kid use Limewire.  Let Uncle Bob surf pr0n.  Let wifey download spyware.  If my data and applications don’t live on the machine, and all the clicks/mouseys are just screen updates, what do I care?

Yup, you can still use a screen scraper or a camera phone to use data inappropriately, but this is where balancing risk comes into play.  Let’s keep the discussion within the 80% of reasonably-factored arguments.  We’ll never eliminate 100%, and we don’t have to in order to be successful.

Sure, there are exceptions and corner cases where data *does* need to leave our embrace, but we can eliminate an entire class of problem if we take advantage of what we have today and stop this endpoint madness.

This goes for internal corporate users who are chained to their desks, not just mobile users.

What’s preventing you from doing this today?

/Hoff

Thinning the Herd & Chlorinating the Malware Gene Pool…

December 28th, 2007 3 comments

Alan Shimel pointed us to an interesting article written by Matt Hines in his post here regarding the "herd intelligence" approach toward security.  He followed it up here.

All in all, I think both the original article that Andy Jaquith was quoted in and Alan’s interpretations shed an interesting light on a problem-solving perspective.

I’ve got a couple of comments on Matt and Alan’s scribbles.

I like the notion of swarms/herds.  The picture to the right from Science News describes the notion of "rapid response," wherein "mathematical modeling is explaining how a school of fish can quickly change shape in reaction to a predator."  If you’ve ever seen this in the wild or even on film, it’s an incredible thing to see in action.

It should then come as no surprise that I think that trying to solve the "security problem" is more efficiently performed (assuming one preserves the current construct of detection and prevention mechanisms) by distributing both functions and coordinating activity as part of an intelligent "groupthink," even when executed locally.  This is exactly what I was getting at in my "useful predictions" post for 2008:

Grid and distributed utility computing models will start to creep into security

A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security.  In the grid model, one doesn’t care where the actions take place so long as service levels are met and the experiential and business requirements are delivered.  Security should be thought of in exactly the same way.

The notion that you can point to a physical box and say it performs function ‘X’ is so last Tuesday.  Virtualization already tells us this.  So, imagine if your security processing isn’t performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute to the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

Sort of sounds like that "self-defending network" schpiel, but not focused on the network and with common telemetry and distributed processing of the problem.  Check out Red Lambda’s cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and to one or more management facilities.
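
As a sketch of what that signaling might look like, here’s a toy version of a common telemetry message plus the upstream aggregation step. No actual standard protocol is being referenced here; every field name and threshold is invented for illustration:

```python
import json
import time

def make_telemetry(node_id, indicator, disposition, confidence):
    """One end node's report about something it observed locally."""
    return {
        "node": node_id,
        "indicator": indicator,      # e.g. a hash or URL observed
        "disposition": disposition,  # e.g. "observed" or "quarantined"
        "confidence": confidence,    # local detector's belief, 0..1
        "ts": time.time(),
    }

def herd_verdict(messages, threshold=0.8):
    """Upstream aggregation: average the herd's confidence per indicator
    and flag anything above the threshold for network-wide disposition."""
    by_indicator = {}
    for m in messages:
        by_indicator.setdefault(m["indicator"], []).append(m["confidence"])
    return {ind: sum(c) / len(c) >= threshold
            for ind, c in by_indicator.items()}

msgs = [make_telemetry(f"host{i}", "sha1:deadbeef", "observed", 0.9)
        for i in range(5)]
print(herd_verdict(msgs))   # {'sha1:deadbeef': True}
print(json.dumps(msgs[0]))  # what would go on the wire
```

The interesting part is the aggregation: one node’s observation is noise, but the same indicator reported at high confidence across the herd becomes actionable for everyone.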

This is what Andy was referring to when he said:

As part of the effort, security vendors may also need to begin sharing more of that information with their rivals to create a larger network effect for thwarting malware on a global basis, according to the expert.

It may be hard to convince rival vendors to work together because of the perception that it could lessen differentiation between their respective products and services, but if the process clearly aids in quelling the rising tide of new malware strains, the software makers may have little choice other than to partner, he said.

Secondly, Andy suggested that basically every end-node would effectively become its own honeypot:

"By turning every endpoint into a malware collector, the herd network effectively turns into a giant honeypot that can see more than existing monitoring networks," said Jaquith. "Scale enables the herd to counter malware authors’ strategy of spraying huge volumes of unique malware samples with, in essence, an Internet-sized sensor network."

I couldn’t agree more!  This is the sort of thing that I was getting at back in August when I was chatting with Lance Spitzner regarding using VMs for honeypots on distributed end nodes:

I clarified that what I meant was actually integrating a HoneyPot running in a VM on a production host as part of a standardized deployment model for virtualized environments.  I suggested that this would integrate into the data collection and analysis models the same way as a "regular" physical HoneyPot machine, but could utilize some of the capabilities built into the VMM/HV’s vSwitch to actually virtualize a single HoneyPot across an entire collection of VMs on a single physical host.

Thirdly, the notion of information sharing across customers has been implemented cross-sectionally in industry verticals with the advent of the ISACs, such as the Financial Services Information Sharing and Analysis Center, which seeks to inform and ultimately leverage distributed information gathering and sharing to protect its subscribing members.  Generally-available services like Symantec’s DeepSight have also tried to accomplish similar goals.

Unfortunately, these offerings generally lack the capacity to garner ubiquitous data gathering and real-time enforcement capabilities.

As Matt pointed out in his article, gaining actionable intelligence from the monstrous amount of telemetric data produced by participating end nodes means that there is a need to really prune for false positives.  This is the trade-off between simply collecting data and actually applying intelligence at the end-node and effecting disposition.

This requires technology that we’re starting to see emerge with a small enough footprint when paired with the compute power we have in endpoints today.

Finally, as the "network" (which means the infrastructure as well as the "extrastructure" delivered by services in the cloud) gains more intelligence and information-centric granularity, it will pick up some of the slack — at least from the perspective of sloughing off the low-hanging fruit by using similar concepts.

I am hopeful that as we gain more information-centric footholds, we won’t actually have to worry about responding to every threat, but rather only to those that might impact the most important assets we seek to protect.

Ultimately the end-node is really irrelevant from a protection perspective, as it should really be little more than a presentation facility; the information is what matters.  As we continue to make progress toward more resilient operating systems leveraging encryption and mutual authentication within communities of interest/trust, we’ll start to become more resilient and information-assured.

The sharing of telemetry to allow these detective and preventative/protective capabilities to self-organize and perform intelligent offensive/evasive actions will evolve naturally as part of this process.

Mooooooo.

/Hoff

Complexity: The Enemy of Security? Or, If It Ain’t Fixed, Don’t Break It…

December 12th, 2007 4 comments

When all you have is a hammer, everything looks like a nail…

A couple of days ago, I was concerned (here) that I had missed Don Weber’s point (here) regarding how he thinks solutions like UTM, which consolidate multiple security functions into a single solution, increase complexity and increase risk.

I was interested in more detail regarding Don’s premise for his argument, so I asked him for some substantiating background information before I responded:

The question I have for Don is simple: how is it that you’ve arrived at the conclusion that the consolidation and convergence of security functionality from multiple discrete products into a single-sourced solution adds "complexity" and leads to "increased risk?"

Can you empirically demonstrate this by giving us an example of where a single-function security device that became a multiple-function security product caused this complete combination of events to occur:

1. Product complexity increased,
2. Led to a vulnerability that was exploitable, and
3. Increased "risk" based upon business impact and exposure

Don was kind enough to respond to my request with a rather lengthy post titled "The Perimeter Is Dead — Let’s Make It More Complex."  I knew that I wouldn’t get the example I wanted, but I did get what I expected.  I started to write a very detailed response, but stopped when I realized a couple of important things in reading his post as well as many of the comments:

• It’s clear that many folks simply don’t understand the underlying internal operating principles and architectures of security products on the market, and frankly, for the most part they really shouldn’t have to.  However, if you’re going to start debating security architecture and the engineering implementation of security software and hardware, it’s somewhat unreasonable to start generalizing and creating bad analogs about things you clearly don’t have experience with.

• Believe it or not, most security companies that create bespoke security solutions do actually hire competent product management and engineering staff with the discipline, processes and practices that result in just a *little* bit more than copy/paste integration of software.  There are always exceptions, but if this were SOP, how many of them would still be in business?

• The FUD that vendors are accused of spreading to supposedly motivate consumers to purchase their products is sometimes outdone by the sheer lack of knowledge illustrated by the regurgitated drivel offered by people suggesting why these same products are not worthy of purchase.

  In markets that have TAMs of $4+ billion, either we’re all incompetent lemmings (to be argued elsewhere) or there are some compelling reasons for these products.  Sometimes it’s not solely security, for sure, but people don’t purchase security products with the expectation of being less secure, with products that are more complex and put them more at risk.  Silliness.

• I find it odd that the people who maintain that they must have diversity in their security solution providers gag when I ask them for proof that they have invested in multiple switch and router vendors across their entire enterprise, that they deliberately deploy critical computing assets on disparate operating systems, and that they have redundancy for all critical assets in their enterprise…including themselves.

• It doesn’t make a lot of sense arguing about the utility, efficacy, usability and viability of a product with someone who has never actually implemented the solution they are arguing about, and who instead compares proprietary security products with a breadboard approach to creating a FrankenWall of non-integrated open source software on a common un-hardened Linux distro.

• Using words like complexity and risk within a theoretical context, with no empirical data offered to back it up short of a "gut reaction" and some vulnerability advisories in generally-available open source software, lacks relevancy and is a waste of electrons.

I have proof points, ROI studies, security assessment results down to the code level, and former customer case studies that demonstrate that some of the most paranoid companies on the planet see fit to purchase millions of dollars worth of supposedly "complex, risk-increasing" solutions like these…I can tell you that they’re not all lemmings.

Again, not all of those bullets are directed at Don specifically, but I sense we’re really just going to talk past one another on this point, and the emails I’m getting trying to privately debate it are agitating, to say the least.

Your beer’s waiting, but expect an arm wrestle before you get to take the first sip.

/Hoff