Sacred Cows, Meatloaf, and Solving the Wrong Problems…

Just as I finished up a couple of posts decrying the investments being made in lumping device after device on DMZ boundaries for the sake of telling party guests that one subscribes to the security equivalent of the "Jam of the Month Club" (AKA Defense-In-Depth), I found a fantastic post on the CERIAS blog where Prof. Eugene Spafford wrote a piece titled "Solving Some of the Wrong Problems."

In the last two posts (here and here,) I used the example of the typical DMZ and its deployment as a giant network colander which, despite costing hundreds of thousands of dollars, doesn’t generally deliver us from the attacks it’s supposedly designed to defend against — or at least those that really matter.

This is mostly because these "solutions" treat the symptoms and not the problem, but we cling to the technology artifacts because it’s the easier row to hoe.

I’ve spent a lot of time over the last few months suggesting that people ought to think differently about who, what, why and how they are focusing their efforts.  This has come about due to some enlightenment I received as part of exercising my noodle using my blog.  I’m hooked and convinced it’s time to make a difference, not a buck.

My rants on the topic (such as those regarding the Jericho Forum) have induced the curious wrath of technology apologists who have no answers beyond those found in a box off the shelf.

I found such resonance in Spaf’s piece that I must share it with you. 

Yes, you.  You who have chided me privately and publicly for my recent proselytizing that our efforts are focused on solving the wrong sets of problems.   The same you who continues to claw desperately at your sacred firewalls whilst we have many of the tools to solve a majority of the problems we face, and choose to do otherwise.  This isn’t an "I told you so."  It’s a "You should pay attention to someone who is wiser than you and I."

Feel free to tell me I’m full of crap (and dismiss my ramblings as just that,) but I don’t think that many can dismiss Spaf’s thoughts offhandedly given his time served and expertise in matters of information assurance, survivability and security:

As I write this, I’m sitting in a review of some university research
in cybersecurity. I’m hearing about some wonderful work (and no, I’m
not going to identify it further). I also recently received a
solicitation for an upcoming workshop to develop “game changing” cyber
security research ideas. What strikes me about these efforts —
representative of efforts by hundreds of people over decades, and the
expenditure of perhaps hundreds of millions of dollars — is that the
vast majority of these efforts have been applied to problems we already
know how to solve.

We know how to prevent many of our security problems — least
privilege, separation of privilege, minimization, type-safe languages,
and the like. We have over 40 years of experience and research about
good practice in building trustworthy software, but we aren’t using
much of it.

Instead of building trustworthy systems (note — I’m not referring to
making existing systems trustworthy, which I don’t think can succeed)
we are spending our effort on intrusion detection to discover when our
systems have been compromised.

We spend huge amounts on detecting botnets and worms, and deploying
firewalls to stop them, rather than constructing network-based systems
with architectures that don’t support such malware.

Instead of switching to languages with intrinsic features that
promote safe programming and execution, we spend our efforts on tools
to look for buffer overflows and type mismatches in existing code, and
merrily continue to produce more questionable quality software.

And we develop almost mindless loyalty to artifacts (operating
systems, browsers, languages, tools) without really understanding where
they are best used — and not used. Then we pound on our selections as
the “one, true solution” and justify them based on cost or training or
“open vs. closed” arguments that really don’t speak to fitness for
purpose. As a result, we develop fragile monocultures that have a
particular set of vulnerabilities, and then we need to spend a huge
amount to protect them. If you are thinking about how to secure Linux
or Windows or Apache or C++ (et al), then you aren’t thinking in terms
of fundamental solutions.

Please read his entire post.  It’s wonderful. Dr. Spafford, I apologize for re-posting so much of what you wrote, but it’s so fantastically spot-on that I couldn’t help myself.

Timing is everything.

/Hoff

{Ed: I changed the sentence regarding Spaf above after considering Wismer’s comments below.  I didn’t mean to insinuate that one should preclude challenging Spaf’s assertions, but rather that given his experience, one might choose to listen to him over me any day — and I’d agree!  Also, I will get out my Annie Oakley decoder ring and address that Cohen challenge he brought up after at least 2-3 hours of sleep… ;) }

  1. Adam
    October 16th, 2007 at 19:58 | #1

    Definitely good timing, but this is theory vs. practice. In theory, things such as ROI, compliance, and sleeping well at night are not considered.
    Let's ponder for a moment. Developers are better skilled, they write better code, and thus we have no IDS/IPS because we have no security defects. There is still a flaw. Developers are human, software testers are human, there will be security defects. Since we know humans are not perfect and thus code can never be perfect, we must have another layer to tell us what happened when something went wrong, or even another layer of protection relying on even more imperfect human-written code.
    Enter real world factors. We have compliance standards such as PCI which require IDS/IPS, since we are required to use such technologies we demand better products, thus justifying the ROI for the vendor to invest more in old research (see I tend to agree with the theory). Granted, if PCI and other regulatory standards changed their approach then we could too. (see below)
    The ROI for full source code reviews is not an easy argument in most firms. The vendor has no real incentive to ensure all code is secure (google profits vs XSS, or MS profits vs buffer overflows) and thus only has to provide the appearance of security through common and simple measures. The regulatory and standards boards require "base" levels of security to make consumers/end users feel warm and fuzzy thus we implement the cheapest warm and fuzzy methods which is reactive security vs proactive.
    It comes down to where you are in the cycle. If you are a vendor, yes spend more on code reviews and training of your developers and staff. If you are the end user or produce custom applications for your own usage or service offerings (ASP/HSP) then in many cases it makes more sense to off load the burden to the insurance company, meet the basic requirements for compliance (IDS), and call it a day.
    So again, I agree with the theory but no one is producing a solution which will work in practice. Until we have better solutions in practice, I need an IDS to tell me what happened so I can sleep at night knowing I can at least reenact an event.

  2. October 16th, 2007 at 20:28 | #2

    Adam:
    I guess the point is that if nobody invests in promoting theory, the practice will never manifest itself. People (like Spaf) have been doing so for quite some time, and we need this stuff desperately.
    I may be pissing into the wind, but when I get an email from Spaf encouraging me to keep slogging (as I did,) you can bet I'm going to continue to push the issue.
    I'm not willing to settle and you shouldn't either. It's our responsibility to use the tools we have but fashion the ones we don't.
    Again, keep your firewall. Utilize your IDS. But don't stick your head in the sand…
    There's some major change coming with disruptive innovation that's going to make this battle all the more important. It's going to slip away from us if we don't turn this into a real fight.
    One last thing, making incremental progress is fantastic. We don't have to go from zero to 100% to suggest we're making progress, but we must start to push the point.
    Thanks for your thoughts. You know I respect them immensely — as I do everyone who takes the time to comment…I may not always agree, but I do appreciate the dialog.
    /Hoff

  3. October 16th, 2007 at 20:30 | #3

    "Feel free to tell me I'm full of crap, but I don't think that many can claim to have earned the right to suggest that Spaf has it wrong:"
    earned the right? if he said 2+2=5 would it take an equally rare breed of person to point out that error?
    if you think he's got it right when he calls this 'solving the wrong problem':
    "We spend huge amounts on detecting botnets and worms, and deploying firewalls to stop them, rather than constructing network-based systems with architectures that don’t support such malware."
    then i'll see your 'spaf' and raise you a fred cohen (http://all.net/books/virus/part6.html):
    "It is therefore the major conclusion of this paper that the goals of sharing in a general purpose multilevel security system may be in such direct opposition to the goals of viral security as to make their reconciliation and coexistence impossible."
    (bearing in mind, of course, that the only real difference between viruses and worms concerns the infection of host programs and that that distinction really doesn't matter here)

  4. Adam
    October 16th, 2007 at 20:47 | #4

    Hoff:
    Thanks. I think my point is that I really want to see more people spending time and money on getting to the end solution and not just touting theory.
    Good information as always though.

  5. October 16th, 2007 at 20:49 | #5

    Well, Kurt, respectfully I'd listen to Spaf and consider his words as having more "weight" than mine given practical experience and his pedigree.
    What I'm getting at (by equally inflaming everyone) is that we're all dogs on the Internet, but some people, by virtue of being around longer, deserve a little more noodle time than others.
    So, before you go and get your panties in a knot, I was suggesting that despite being in this racket for only 13 years, I'm still a kid and I'd think pretty hard about a point Spaf made before I just up and disagreed with it.
    Same goes for other folks who I respect.
    I think I used the incorrect wording…I'll go choose a more appropriate phrasing.
    I have to be honest with you…I'll have to read that statement regarding Cohen at some time other than 12:48 in the morning…
    Your point is taken. I hope I explained mine.
    /Hoff

  6. October 17th, 2007 at 03:48 | #6

    perhaps i could have made my point using more (security) culturally familiar elements that could be more readily understood at any time of the day or night…
    like, there is no panacea, or if it sounds too good to be true then it probably is… to me, a true 'solution' to a security problem (rather than a business problem involving security, which is what most so-called security solutions solve) such as that seems too good to be true…
    (i'm probably just as much a kid as you if not more so, but i don't put much credence in names, only ideas…)

  7. October 17th, 2007 at 04:35 | #7

    Kurt:
    I didn't highlight Spaf's post solely because of his "name" and it's certainly not because I'm hoping to see him on "Dancing with the Stars."
    I referenced his writing precisely because of his IDEAS and because I believe they are sound, intriguing and correct. He's also damned smart, contributed immensely to our profession and knows more than I've forgotten about IA/IS.
    It's called respect.
    Enough already. I've corrected my poor choice of words and I thank you for pointing them out.
    There's no panacea being floated here, just the concepts related to the foundational set of problems that need to be solved. They aren't "too good to be true" they are "too relevant not to be addressed."
    Thanks,
    /Hoff

  8. Andrew Yeomans
    October 17th, 2007 at 05:17 | #8

    As I implied in my soundbite http://www.techworld.com/security/features/index…. there is also an economic argument.
    *All else being equal*, it's usually easier and cheaper to fix a coding flaw than to put in another layer of complexity trying to detect it. Especially as that bit of flawed code knows the context it is in, while the detection code effectively has to model the whole environment so it doesn't make mistakes. So fixing the flaw is likely to be far fewer lines of code and far simpler to implement.
    Now in the real world, we don't always have access to the source code or the ability to fix it, so it might only be possible to patch around it, as replacing the code is just too expensive. Unfortunately that leads to some vendors following the money, providing a service that should not be required in my utopia. And it's a nice revenue stream, with no incentive to fix the root cause.
    It makes a nice game when going round security shows, to count the ratio of how many people are providing security vs patching up other vendors' defects.
    To save someone pointing it out, the economics of fixing the flaw only apply when we are discussing the same domain (software). When the security depends on a mix of people, process and technology, we'll need layers of complexity where one domain monitors another, such as software monitoring people and vice versa.
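
A minimal sketch of Andrew's economics point (all function names here are invented for illustration, not from any real product): the fix at the source is one line of context-aware validation, while a bolt-on detector has to model the whole environment and can be wrong in both directions.

```python
# Hypothetical illustration of the "fix the flaw vs. detect it" economics.

# Flawed code: trusts its input, so a nonsense value slips straight through.
def set_quantity_flawed(order, qty):
    order["qty"] = qty

# Fix at the source: this code knows its own context (an order quantity),
# so the validation is a single, precise check.
def set_quantity_fixed(order, qty):
    if not (isinstance(qty, int) and 0 < qty <= 10_000):
        raise ValueError("invalid quantity: %r" % (qty,))
    order["qty"] = qty

# Detection layer bolted on from outside: lacking the context, it must
# model "normal" behaviour statistically and will produce both false
# positives and false negatives.
def quantity_anomaly_detector(qty, history):
    avg = sum(history) / len(history)
    return abs(qty - avg) > 3 * avg  # crude heuristic, not a real product
```

The asymmetry is the point: the one-line fix is exact, while the external detector is a whole subsystem that only approximates the same property.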

  9. October 17th, 2007 at 05:28 | #9

    Great discussion(s).
    Stay on the course Chris. Once in a while something needs to nudge us out of our comfort zones or we do not grow, and truth, while it sometimes may hurt a bit, is a great nudger.
    The operative words in Cohen's conclusion (20+ years ago), "… the goals of sharing in a general purpose multilevel security system may be in such direct opposition to the goals of viral security as to make their reconciliation and coexistence impossible" are MAY BE.
    In the link he provided, Cohen also writes that "integrity control must be considered an essential part of any secure operating system".
    You may recall that Trustifier provides for both secrecy and integrity, as discussed in the security model (I sent you the link). However, Trustifier defends against malware more in the way suggested above by Spaf:
    "I'm not referring to making existing systems trustworthy, (…which I don't think can succeed)…rather than constructing network-based systems with architectures that don't support such malware."
    With the same technology that DOES make existing systems trustworthy, Trustifier changes the nature of the kernel to make it foreign to malware, changing the rules that would enable it to execute. Our first customer, back in 2003 or so, avoided the downtime and clean-up from the last major worm outbreak wherever Trustifier was implemented, and only became aware of it when off-main sites and vendor partners not using it went down. Thus, a fine-grain data access and audit control system protected against a major malware outbreak because of the controls it added to the kernel.
    Not my intention to make this a commercial for Trustifier; however, I attempt to respond whenever I see people say that things we do now can't be done (even if they don't listen), and will continue to do so.
    Cheers.

  10. October 17th, 2007 at 05:31 | #10

    Ooo, it's the Appeal to Authority! ;-) As cute and cuddly as Spaf is, and as wise as he is, I'm sure he'd agree that we shouldn't kick our DMZs to the curb just because we haven't been solving the underlying problems all this time. Yes, we need to start doing things right in software and systems. I don't think anyone will argue with that. But until my systems are completely rebuilt to the New Spaf Paradigm, excuse me while I continue to nurse my ACLs along.

  11. October 17th, 2007 at 08:16 | #11

    @chris hoff:
    "There's no panacea being floated here"
    yes there is, check the quote from my first comment… a system that 'doesn't support such malware' is effectively a panacea within the scope of such malware…
    "They aren't "too good to be true" they are "too relevant not to be addressed.""
    any true solution to the malware problem or any significant subset thereof (like the worm problem) is too good to be true…

  12. October 17th, 2007 at 08:47 | #12

    @rob lewis:
    "The operative word in Cohen's conclusion (20+ years ago ), "… the goals of sharing in a general purpose multilevel security system may be in such direct opposition to the goals of viral security as to make their reconciliation and coexistence impossible" are MAY BE."
    nice of you to focus on those two words…
    in order for computers to be useful to us in a general purpose sort of way we need to be able to share things… since determining whether a shared item will represent code on the destination system is reducible to the halting problem, and determining whether that code is malicious is also reducible to the halting problem, i think it's safe to replace "may be" with "is"…
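
Kurt's undecidability argument can be made concrete with a Cohen-style diagonalization sketch (a hedged illustration with made-up names, not code from Cohen's paper): hand any claimed perfect detector a program built to consult that very detector and do the opposite of its verdict, and the verdict is wrong either way.

```python
# Diagonalization sketch: no detector can be right about a program that
# inverts the detector's own verdict on itself.

def build_contrary(detector):
    """Given any claimed malware detector, construct a program that
    defeats it by doing the opposite of whatever the detector says."""
    def contrary():
        if detector(contrary):
            return "acts benign"     # branded malicious -> behaves benignly
        else:
            return "acts malicious"  # branded benign -> misbehaves
    return contrary

# Whatever the detector decides, it is wrong about its contrary program:
says_everything_is_clean = lambda prog: False
says_everything_is_malware = lambda prog: True

print(build_contrary(says_everything_is_clean)())    # acts malicious
print(build_contrary(says_everything_is_malware)())  # acts benign
```

Real programs consult the deployed detector rather than receiving it as an argument, but the self-reference is the same, which is why perfect detection reduces to the halting problem.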

  13. October 17th, 2007 at 09:58 | #13

    @Kurt
    Now you're just trying to pick a fight. Fine. You're being incredibly myopic and, quite frankly, choosing to ignore trends which point to the inaccuracy of your assertion.
    You wish to dismiss and/or discredit the greater message to otherwise focus on ranting about how you maintain the malware can't be solved at the OS level. OK.
    Just so we're clear, you're asserting that an operating system cannot be constructed to be resilient enough to resist malware? Even as an incremental improvement?
    Let's take OSes such as Vista and OS X Leopard as examples…
    So, components like UAC and Address Space Layout Randomization (ASLR) don't help? Or, if we stop a minute and look past that tree you're trying to chop down and admire the forest for a minute, how about the memory isolation capabilities inherent in hypervisor-based systems — you know, like the kind we're going to end up with in either our desktops or virtualized desktops.
    Like I said, I'll go back and read that Cohen paper you referenced in more detail, but I must say that while some things haven't changed in 20 years, a lot has — some of those soundbites in Cohen's papers aren't current or have been chipped away at.
    I admire Cohen's contributions the same as I do Spaf's, and I've already admitted to a post-midnight poor choice of words. I'm not apt to keep paying for sins I've already confessed to and repented against ;)
    I'm still not sure what you're arguing here.
    /Hoff

  14. October 17th, 2007 at 11:44 | #14

    @christofer hoff:
    "Now you're just trying to pick a fight."
    actually, no, i'm not… i have tried to avoid my earlier side-remarks that ultimately got misinterpreted and led to friction (eg. i don't not respect spafford, i just consider his identity to be a non sequitur when evaluating the merit of the ideas put forward), but at no time was it my intent to start a fight…
    "You wish to dismiss and/or discredit the greater message to otherwise focus on ranting about how you maintain the malware can't be solved at the OS level."
    i don't mean to dismiss the entire greater message – i'm sure that there are all sorts of security problems that could be solved if we just worked on the right problems as spafford suggests, but malware isn't one of them… that doesn't mean there was no value in his rant, but rather that it was simply overly broad…
    "Just so we're clear, you're asserting that an operating system cannot be constructed to be resilient enough to resist malware? Even as an incremental improvement?"
    there's a huge difference between resisting malware and not supporting malware… and i'm not sure what you mean by 'an incremental improvement' (unless you're glossing over the binary nature of "not supporting" and working from a perspective of gradients instead)…
    "So, components like UAC and Address Space Library Randomization (ASLR) don't help?"
    UAC is about least privilege / separation of privilege… cohen's experiments demonstrate that those do not necessarily stop viruses (and the demonstration works quite well for many other malware types)… locking down privileges interferes with certain types of file system (or other privileged resource) related payloads and of course most conventional attempts at malware persistence, but neither persistence nor those payloads are necessarily requirements for malware…
    ASLR interferes with the exploitation of certain types software flaws, but once again such exploitation isn't necessarily a requirement for malware…
    these both interfere with numerous instances of current malware, but only because those instances were implemented with a different operating environment in mind… any change in the operating environment can potentially have a negative effect on existing malware but that only lasts as long as it takes for malware creators to adapt and design their wares for the new environment – which leaves us still needing to spend gobs of money on detectors and firewalls and the like…
    "how about the memory isolation capabilities inherent in Hypervisor-based systems"
    i'm sure there are instances of malware that it would interfere with, but do you think there's a *class* of malware that it would fundamentally inhibit?
    "I've already admitted to a post-midnight poor choice of words. I'm not apt to keep paying for sins I've already confessed to and repented against ;)"
    and i have no interest in holding your feet to the fire over that issue… i made a brief quip about it at the beginning, that's all…

  15. October 17th, 2007 at 22:33 | #15

    @Kurt,
    "we need to be able to share things…since determining whether a shared item will represent code on the destination system is reducible to the halting problem, and determining whether that code is malicious is also reducible to the halting problem, i think it's safe to replace "may be" with "is"…"
    I believe that Chris's diagnosis of myopia may be accurate, for your statement is clearly based on the assumption that all implementations of MLS, now and in future, will be the same as those used by Cohen ( and because he said so) all those years ago. That precludes that any innovation in this area is possible.
    I cannot grant you free license to simply assume that because Trustifier provides MLS it inhibits information sharing. The concepts of TOS/MLS are not new, but Trustifier's implementation of them is what is innovative. Hence the problem is not reducible to a halting problem at the code level, since it is not a pattern matcher but a behavior enforcer. Therefore Trustifier is able to stop unknown executables (e.g. viruses) and unauthorized use of data. To the business world and the information survivalist, is this not the bottom line?
    Which brings me to this point you made:
    "a true 'solution' to a security problem (rather than a business problem involving security, which is what most so-called security solutions solve)"
    Why would ANY resources be spent on any problem that was not a business problem (economic, CIA, or otherwise)? To suggest otherwise would imply a tunnel-visioned obsession with some theoretical exercise with no real-world purpose or value creation.

  16. October 17th, 2007 at 22:43 | #16

    Due to the late hour, the last sentence in my first paragraph should say:
    "That precludes that innovation in this area will NOT be possible,(ever). "

  17. October 18th, 2007 at 07:25 | #17

    @rob lewis:
    "I believe that Chris's diagnosis of myopia may be accurate, for your statement is clearly based on the assumption that all implementations of MLS, now and in future, will be the same as those used by Cohen ( and because he said so) all those years ago. That precludes that any innovation in this area is possible."
    cohen's reference to multi-level security systems was not to specific implementations of the day but rather to fundamental models (such as bell-lapadula) which he talks about by name elsewhere in his paper… for those interested, the index to all parts of the paper is at http://www.all.net/books/virus/index.html – unfortunately there's no link back to the index from the page i previously referenced…
    as for my statement that you quoted, it sidesteps the issue of multi-level security systems entirely because all they are capable of doing in the final analysis is providing partitions across which infection cannot pass… within the boundaries of those partitions, MLS's have no effect…
    "Why would ANY resources be spent on any problem that was not a business problem, (economic, CIA, or otherwise). To suggest otherwise would suggest a tunnel-visioned obsession with some theoretical exercise but no real-world purpose or value creation."
    i don't mean technical problems with business implications – by business problem i was referring to such things as 'i need to deploy technology X to meet requirements from Y'… that is the context in which most so-called security 'solutions' solve problems and i dare say (getting back on the original topic) that those are the wrong sorts of problems for the security industry as a whole to be focused on…

  18. October 19th, 2007 at 10:33 | #18

    @KURT
    "as for my statement that you quoted, it sidesteps the issue of multi-level security systems entirely because all they are capable of doing in the final analysis is providing partitions across which infection cannot pass… within the boundaries of those partitions, MLS's have no effect…"
    Trustifier is unlike any other technology ever seen before. It does not create partitions. It enforces data and system access requests on a per-transaction basis, acting at the kernel level to digitally provide domain separation. Thus any malware that tries to execute anywhere under Trustifier's control will not be allowed to if it violates the behavior/policy — even unknown malware — as the system is default deny.
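
The per-transaction, default-deny mechanism Rob describes can be sketched as a toy reference monitor (a hypothetical illustration only; the class and rule format here are invented and are not Trustifier's actual design): every (user, operation, object) request fails unless a rule explicitly allows it, so previously unseen malware is denied by default rather than having to be recognized.

```python
# Toy default-deny reference monitor: every (user, operation, object)
# transaction is denied unless a rule explicitly grants it.

class DefaultDenyMonitor:
    def __init__(self):
        self.rules = set()  # explicitly granted (user, operation, object) triples

    def allow(self, user, operation, obj):
        """Explicitly grant one (user, operation, object) triple."""
        self.rules.add((user, operation, obj))

    def check(self, user, operation, obj):
        """Default deny: anything not granted fails, including requests
        from executables the monitor has never seen before."""
        return (user, operation, obj) in self.rules

monitor = DefaultDenyMonitor()
monitor.allow("alice", "read", "/data/payroll.db")

print(monitor.check("alice", "read", "/data/payroll.db"))   # True
print(monitor.check("alice", "write", "/data/payroll.db"))  # False
print(monitor.check("worm", "exec", "/bin/sh"))             # False
```

Note how this differs from a detector: nothing here inspects the worm; it is simply never in the allowed set. Kurt's counterpoint in the following comments is that the hard part is hidden in writing the rules, i.e. deciding which behaviours to authorize.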

  19. October 19th, 2007 at 15:11 | #19

    @rob lewis:
    "Trustifier is unlike any other technology ever seen before. It does not create partitions. "
    if it doesn't define multiple levels of security then i suspect it doesn't qualify as a multi-level security system…
    from the rest of your description it sounds like a behavioural whitelist, which means rather than being a system that doesn't support worms or botnets, it's merely a system that hopes to block worm or botnet activity (subject to the limits of our abilities to detect behaviours and define which are allowed and which aren't)…

  20. October 19th, 2007 at 17:02 | #20

    You two want to get a room?
    ;)
    /Hoff

  21. October 21st, 2007 at 09:25 | #21

    Naw, Kurt is myopic and I'm nothin special to look at. No reason to get that close. :)
    @Kurt one last time
    Trustifier is not "merely a system that hopes to block worm or botnet activity (subject to the limits of our abilities to detect behaviours and define which are allowed and which aren't)…" it DOES block unauthorized malware activity. When the basis for your rule set maps directly to your business data flows (users, groups, and roles), it is intuitively easier to set up your internal controls.
    The thing about trustworthy systems is that one must know when, where and how much one can trust their system/data and where one cannot. That is the purpose of Trustifier: to provide and insert deterministic control inside the network where it has been lacking before.
    I don't know how the field of IT security will ever progress in the case of any real innovation if the blanket response is always that "it can't be done", without even seeing the model or trying to figure it out.

  22. October 21st, 2007 at 15:18 | #22

    it's funny how i get accused of trying to start a fight when i'm the one who *isn't* passing judgments on people…
    @rob lewis:
    "Trustifier is not "merely a system that hopes to block worm or botnet activity (subject to the limits of our abilities to detect behaviours and define which are allowed and which aren't)…" it DOES block unauthorized malware activity."
    the operative word being "unauthorized" – it doesn't do anything about authorized malware activity, and that is the manifestation of what i was referring to as the limits of our abilities to define which behaviours are allowed and which aren't…
    it also doesn't do anything about undetected behaviours (curse that halting problem)…
    stopping malware behaviour and stopping unauthorized malware behaviour are not the same thing, nor is it likely that they can be made the same thing…
    "When the basis for your rule set maps directly to your business data flows (users, groups, and roles), it is intuitively easier to set up your internal controls."
    ok, but doesn't that mean that the basis for your rule set maps onto an np-complete problem?
    "I don't know how the field of IT security will ever progress in the case of any real innovation if the blanket response is always that "it can't be done", with out even seeing the model or trying to figure it out. (??????)"
    something can fail to completely fulfill its claims and still be better than what came before… it's still progress in that case, but claims should stay within the realm of what's actually possible lest we start selling snake-oil… i think we can all agree that that is something to be avoided…

  23. Eric Marceau
    May 13th, 2008 at 15:46 | #23

    Truly, the paradigm used by Trustifier is unlike any other in industry. It has been demonstrated, under actual operating conditions, at customer sites and major government contractors, that its functionality WILL
    a) block known threats; and
    b) block unknown threats.
    It achieves this by requiring each "user" (normal, security, administrator, manager) to comply with a permission profile (as previously stated, Trustifier's default is DENY, so anything that is to be attempted within its "protected host environment" needs to be specifically documented for each such user). Regardless of the nature of the malware, unless the sysadmin is specifically allowed to "destroy" or "corrupt" any file/folder, such needs to be specifically allowed by the "manager" who, under Trustifier, can be assigned the function of assigning limited privileges to the sysadmin. NOTE that this is FAR superior to the environment where the control of privilege assignment is in the SAME hands as the person who performs the necessary maintenance activities. It is separation of privileges as per definition of business roles which, until the availability of Trustifier, was ONLY achieved by rebuilding the OS. The "beauty", dare I say elegance, of Trustifier is that it can be, quite literally, dropped into an installation having previously defined groups of users, and their privileges can be "defined" at a group level, as well as modified uniquely for each individual within that group, as and when required.
    If you wish to pursue a deeper analysis, it would be best to enter directly into a detailed technical discussion with Googgun Technologies' CTO, who would be more than happy to answer all your questions and WILL, I am sure, convert you to the stance that Rob has previously attempted to put forward.
    You may wish to review their technical document available at: http://www.googgun.com/pdf/gti_trustifier_design….
    As EVERYONE knows, Windows technology is proprietary and they protect their IP (intellectual property) with senseless abandon. By implementing Googgun's "Trustifier Cocoon" or "Trustifier Sanctity", among others, all activity on an intranet can be protected from external/internal threats in a manner that CANNOT be circumvented.
    Again, no need to take my word for it. Get it from the "horse" itself: Googgun's CTO. In this case, you would need to characterize that individual as a "workhorse", given his ability to mobilize his organization to deliver the products that are currently available, at the quality that they demonstrably incorporate.
    Good luck in your future investigations and discussions.
    Eric

  24. Eric Marceau
    May 16th, 2008 at 14:45 | #24

    I recognize that this discussion has been shelved for some time now, but I believe that it ended with an inappropriate conclusion based on some improper/unverified assumptions.
    The problem seems to be that people don't fully understand how different Trustifier is, as compared to other security wares. You have to specifically give authorization for actions to take place.
    Now I see that you take exception with the above because "authorized malware", as a conceptual vehicle, cannot be blocked. I believe there is an underlying assumption here that needs to be addressed.
    **ASSUMPTION: Authorization of script IMPLIES global authorization of all actions stated in script, regardless of intended scope of script at the outset.**
    This is EXACTLY where Trustifier demonstrates its superiority over others, in that the above implication is invalid. Why is that? It is because, with Trustifier, context control is granularized, by user, by operation, by object, with each of these three having properties that DO preclude generalized scenarios, regardless of who attempts what by whatever mechanism that may at first glance have a semblance of privilege.
    Under Trustifier, "super-user" no longer has all-encompassing privileges. "Super-user" no longer has the latitude to act with impunity. He is limited by system-global, or context-specific, rules which, without explicit assignment of privilege, preclude any possible success. Understand that context-based rules can ONLY expand privileges. There is NO need for context-based rules to LIMIT privileges. This is the MIL requirement and this is what Trustifier delivers.
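[The claim above, that context rules can ONLY expand privileges and never restrict them, amounts to a monotonic policy: every decision starts from a deny-all baseline, and rules may add permission but nothing can subtract it. A minimal sketch of that idea, with hypothetical names and no relation to Trustifier's implementation:]

```python
def decide(user, operation, obj, grant_rules):
    """Start from DENY; each rule may only ADD permission.
    Because there is no rule type that subtracts, adding more
    rules can never revoke an already-permitted action
    (the policy is monotonic)."""
    allowed = False  # deny-all baseline
    for rule in grant_rules:
        if rule(user, operation, obj):
            allowed = True  # rules can only flip deny -> allow
    return allowed


# Even "root" has no implicit privilege without an explicit rule.
rules = []
assert decide("root", "write", "shadow", rules) is False

# A context-specific grant expands privilege for exactly that
# (user, operation, object) triple and nothing else.
rules.append(lambda u, op, o: u == "root" and op == "read" and o == "shadow")
assert decide("root", "read", "shadow", rules) is True
assert decide("root", "write", "shadow", rules) is False
```

[Under such a model a "super-user" label confers nothing by itself; only the enumerated grants matter, which is the separation-of-privilege property the comment argues for.]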
    Once you fully understand this, and what it implies, you will begin to understand why Trustifier can make its claims. Also, due to the revolutionary nature of its approach, it is understandable that Googgun would want to protect its IP at all costs, especially given the potential revenues that are at stake for an "ultimate" technology, which should in fact be labelled a "terminal" technology, in that, IMHO, it is the best-fit solution, bar none, for the present level of computing technologies. This is because Trustifier acts at the point where "the rubber hits the road".
    It MAY need a revisit when quantum computing becomes commercialized. That is NOT to suggest that there must, by necessity, BE a need to change. Time will tell.

  25. May 16th, 2008 at 15:51 | #25

    Eric:
    I appreciate your effort in commenting, but I keep un-publishing your comments because you make them nothing but a giant ad for your company; I asked Rob to debate points without becoming a commercial. Your posts are a commercial.
    I don't mind a "Hey, we do things differently, contact me for more info." w/a small description, but you continue to overstep the bounds.
    Please stop.

  26. Eric Marceau
    May 16th, 2008 at 16:24 | #26

    It seems to me that it might have been better to simply edit out the "commercial" part, which was the tail end. The beginning part was completely objective and appropriate, addressing the fundamental misunderstanding of how it performs. It is most unfortunate that you did not see the value in sharing that with others who might wish to know. I believe it was a disservice to your readership.

  27. Eric Marceau
    May 16th, 2008 at 16:32 | #27

    BTW, I am not an employee of GTI. I am an independent consultant helping people deal with regulatory headaches.
    Is there a problem with an independent wishing to act for a champion of a worthy cause, based on merits?

  28. May 16th, 2008 at 19:04 | #28

    Eric:
    Thanks for clarifying your relationship with Trustifier, however I've made myself clear. You're not championing an idea or a "cause," you're promoting a product; one that's been discussed here on multiple occasions.
    I don't edit people's comments for content by picking out bits and pieces. Besides which, if I did, there'd be about 3 sentences left of your multi-paragraph posts.
    I've spoken about Trustifier many times, Rob from Googgun comments here often and he's more restrained than you are.
    This post isn't about Googgun. It's not about Trustifier, neither is the post on MLS. If it were, I'd have no problem about discussing it, but it's distracting from the real "cause."
    Thanks,
    /Hoff

  29. Eric Marceau
    May 20th, 2008 at 10:28 | #29

    I readily concede that this blog is not about Googgun or Trustifier, or MLS. It is about ensuring we all have a comprehensive characterization of the threat environment and of the full risk spectrum, as well as full awareness of which options truly reduce info/tech risk in the current/future unpredictable environment, allowing us to survive comfortably with a tolerable level of risk.
    Do you not agree that those 3 sentences from my censored post clarified a critical misunderstanding, and that it is in the best interest for your readership to be made aware of those, in order to be fully briefed, so that they may make an informed decision?
    Do you not agree that, having clarified the misunderstanding, the product that offers the potential benefits which are implied should be given a fair hearing, rather than saying that someone else's product, which has not yet come out of the R&D silo, should be given a label of "great potential", when we all know that such promises have all been shown, in the past, to be nothing more than vapourware, even if it does have some government involvement during the development cycle?
    I perceive a double standard, especially when the reference to one product was that of "too late in the game". If it was too late for the one, surely the other one must be still-born. OR if the one under development has "potential", surely "a bird in the hand is better than two in the bush", and the COTS must be given due consideration, if for no other reason than that due diligence requires its evaluation on the basis of immediate availability, containment of threat/risk and savings (if development of proprietary/customized solution was being considered)!
    One factor indicative of a solution's success rate is the number of installations which, having once adopted a security architecture, abandon it for another (hopefully better) one. It would prove a valuable service to all if someone were to begin tracking such statistics independently of the vendor, in a manner that can ensure no misrepresentations by "fake customers". Such statistics would be quite telling. A question that should be asked is whether such installations would adopt a different security architecture if a "better" one was available. I would also suggest that lead-time between decision and full implementation should be one of the considerations, as well as how much install-time customization was required.
    Those 4 factors, assuming all threats are addressed, should be quite revealing as to a given security architecture's ability to adapt to evolving hazardous operational environments, internal or external.
