How to Kick Ass in Information Security — Hoff’s Spiritually-Enlightened Top Ten Guide to Health, Wealth and Happiness

June 24th, 2007 8 comments

I’ve spent a while in this business and have been doing time on planet Earth in a variety of roles in the security field; I’ve been a consumer, a CISO, a reseller, a service provider, and a vendor, so I think I have a good sense of shared empathy across the various perspectives that make up the industry’s collective experience.

I get to spend my time traveling around the world speaking to very smart people; overworked, tired, cynical, devoted, and fanatical security folks who are all trying to do the right thing within the context of the service they provide their respective businesses and customers.

A lot of them are walking around in a trance however, locked into the perpetual hamster wheel of misery that many will have you believe is all security can ever be.  That’s bullshit.  I love my job; I’ve loved every one of them in this space.  They have all had their ups and downs, but I know that I’ve made a positive difference in every one because I believe in what I’m doing and more importantly I believe in how I’m doing it.   If you want to manifest misery, then you will.  If you want to change the way security is perceived, you will.

Most of the people I speak to have an identical set of problems and for some reason seem to be stuck in the same pattern, not doing much about trying to solve them.  Now, I’m not going to get all preachy, but when I hear the same thing over and over, up and down the stack from the Ops trenches to the CSO, and nobody seems able to gain traction towards a solution, I’m puzzled as to whether it’s the problem or the answer people are seeking.

In many cases, people feel the need to solve problems themselves.  It’s the classic “Dad won’t pull into the gas station to ask directions when he’s lost” syndrome.  Bad form.   Let’s just pull over for a second and see if we can laugh this thing off and then get back on the road with a map.

I thought that I’d summarize what I’ve heard and articulate it with my top ten things that anyone who is responsible for architecting, deploying, managing and supporting an information security program should think about as they go about their jobs.   This isn’t meant to compete with Rothman’s Pragmatic CSO book, but if you want to send me, say, half the money you would have sent him, I’m cool with that.

These are not in any specific order:

1.    Measure Something
I don’t care whether you believe in calling this “metrics” or not.  If you’ve got a pulse and a brain (OK, you probably need both for this) then you need to recognize that the axiom “you can’t manage what you don’t measure” is actually true, and the output – no matter what you call it – is vitally important if you expect to be taken seriously.

Accountants have P&L statements because they operate around practices that allow them to measure the operational integrity and fiscal sustainability of a business.  Since security is a functional service mechanism of the business, you should manage what you do as a business.

I’m not saying you need to demonstrate ROI, ROSI, or RROI, but for God’s sake, in order to gauge the efficiency, efficacy and investment-worthiness of what you’re doing, you need to understand what to focus on and what to get around to when you can spare cycles.  Be transparent about what you’re doing and why to management.  If you have successes, celebrate them.  If you have failures, provide a lessons-learned and move on.

You don’t need a degree in statistics, either.  If you want some good clue as to what you can easily do to start off measuring and reporting, please buy this.  Andy Jaquith, while stunningly handsome and yet quaintly modest (did I say that correctly, Andy?) knows his shizzle.
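Measurement doesn’t have to be fancy to be useful, either.  As a purely hypothetical sketch (the tickets, numbers and metric below are invented for illustration, not drawn from Andy’s book), even a few lines of Python can turn raw remediation records into a month-over-month trend you can put in front of management:

```python
from statistics import mean

# Hypothetical patch tickets: (month, days from advisory to patch deployed)
tickets = [
    ("2007-04", 21), ("2007-04", 34), ("2007-04", 18),
    ("2007-05", 19), ("2007-05", 25),
    ("2007-06", 12), ("2007-06", 16), ("2007-06", 9),
]

def mean_days_to_patch(tickets):
    """Average remediation time per month: one simple, trendable metric."""
    by_month = {}
    for month, days in tickets:
        by_month.setdefault(month, []).append(days)
    return {m: round(mean(d), 1) for m, d in sorted(by_month.items())}

print(mean_days_to_patch(tickets))
# A downward trend here is something management can actually act on.
```

A single, honestly-collected number like this, trended over a few quarters, buys you more credibility than a forty-page report nobody reads.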

2.    Budget Isn’t Important
That’s right, budget isn’t important — it’s absolutely everything.   If you don’t manage your function like it is a business burning your own cash, then you won’t survive over the long term.  Running a business takes money.  If you don’t have any, well…  As my first angel investor, Charles Ying, taught me, “Cash is King.”   I only wish I had learned this and applied it earlier.

If you lead a group, a team or a department and you come to the second budget cycle (the first you probably had no control over since you inherited it) under your watch and you open the magic envelope to discover that you don’t have the budget to execute on the initiatives in your security program that align to the initiatives of supporting the business, then quit.

You should quit because it’s your fault. It means you didn’t do your job.  It means you’re not treating things seriously as a set of business concerns.

Whether you’re in a downcycle, budget-cutting environment or not, it’s your job to provide the justification and business-aligned focus to get the money you need to execute.  That may mean outsourcing.  That may mean you do more with less.  That may mean that you actually realize that there are tradeoffs you need to illustrate which indicate risk, reward and investment strategies, and let someone else make the business decision to fund them or not.

Demonstrate what you can offer the business from your security portfolio and why it’s worth investing in.  You won’t be able to do everything.  Learn to stack the deck and play the game.  Anyone who tells you that a budget cycle isn’t a game is (1) a lousy liar, (2) someone who doesn’t have any budget and (3) nobody you need to listen to.

3.    Don’t Be a Technology Crack-Whore
If you continue to focus on technology to solve the security “problem” without the underlying business process improvement, automation and management & measurement planes in place to demonstrate what, why and how you’re doing things, then you’re doomed.   I’m not going to re-hash the ole “People, Process and Technology” rant as that’s overplayed.

Learn to optimize.  Learn to manage your security technology investments as a portfolio of services that can be cross-functionally leveraged across lines of business and operationalized and cost-allocated across IT.

Learn to recognize trends and invest your time and energy in understanding what, if anything, technology can do for you and make smart decisions on where to invest; sometimes that’s with big companies, sometimes that’s with emerging start-ups.

Quantify the risk vs. return and be able to highlight the lifecycle of what you expect from a product.  Understand amortization and depreciation schedules and how they affect your spend cycles, and sync this to your key vendors’ roadmaps.

If your solutions deliver, demonstrate it.  If they fail, don’t try to CYA, but refer back to the justification, see where it blew a gasket and gracefully move on.  See #1 above.

4.    Understand Risk
Please take the time to understand the word “risk” and its meaning(s).  If you continue to overuse and abuse the term in conversation with people who actually have to make business decisions, and you don’t communicate “risk” using the same lexicon and vocabulary as the people who write the checks, you’re doing yourself a disservice and you’re insulting their intelligence.

If you don’t understand or perform business impact analyses and only talk about risk within the context of threats and vulnerabilities, you’re going to look like the FUD-spewing technology crack-whore in #3 above.

This will surely be the conclusion drawn, because you sound like all you want is more money (see #2) and you clearly can’t communicate in a language that demonstrates how what you do unequivocally contributes to the business; probably because you haven’t measured anything (see #1).

If you want to learn more about how to understand risk, please read this. Alex Hutton is one wise MoFo.

5.    Network
That’s a noun and a verb.  Please don’t hunker in your bunker.  Get out and talk to your constituents and treat them as valued customers.  Learn to take criticism (see #6) and ask how you’re doing.  By doing that, you can also measure impact directly (see #1).   You should also network with your peers in the security industry; whether at local events, conferences or professional gatherings, experiencing and participating in the shared collective is critical.

I, myself, like the format of the various “CitySec” get-togethers.  BeanSec is an event that I help to host in Boston.  You can find your closest event by going here.

The other point here is that as budget swings towards the network folks who seem to be able to do a better job at communicating how investing in their portfolio is a good idea (see #1 and #2) you better learn to play nice.  You also better understand their problems (see #6) and the technology they manage.  If you expect to plug into or displace what they do with more kit that plugs into “their” network, you better be competent in their space.  If they’re not in yours, all the better for you.

6.    Shut-up and Listen
Talk with one hole, listen with two.

If I have to explain this point, you’ve probably already dismissed the other five and are off reading your Yahoo stock page and the latest sports scores.  God bless and call me when you start your landscaping business…I need my hedges trimmed.

7.    Paint a Picture
Please get your plans out of your head and written down!  Articulate your strategy and long-term plan for how your efforts will align to the business and evolve over time to mature and provide service to the business.  Keep it short, concise, in “English” and make sure it has pretty pictures.  Circulate it for commentary.  Produce a mantra and show pride in what you do and the value you add to the business.   It’s a business plan.  Sell it and support it like it is.  Demonstrate value (see #1) and you’ll get budget (#2) because it shows that you understand you make business decisions, not technology knee-jerks.

This means that you keep a pulse on what technology can offer, how that maps to trends in your business, and what you’re going to do about them with the most efficient and effective use of your portfolio.

Most of this stuff is common sense and you can see what’s coming down the pike quite early if you pay attention.  If you craft your business plan and evolution in stages over time, you’ll look like a freaking prescient genius.  You’ll end up solving problems before they become one.  Demonstrate that sort of track record and you’ll have more runway to do what you want as well as what you need.

8.    Go buy a Car
Used or new, it doesn’t matter.  Why?  Because the guys and gals who sell cars for a living have to deal with schmucks like you all day long, and yet they still make six figures and go home at the end of the day after an 8-10 hour shift and get to ignore the office.  They know how to sell.  They listen (#6), determine what you have to spend (#2) and then tell you how good you look in that ’84 Sentra, and still manage to up-sell you to a BMW M3 with the paddle shifters and undercoating.

You need to learn to sell and market like a car salesman – not the kind that makes you feel sticky, but the kind that you want to invite over to your BBQ because he had your car washed while you waited, brought you coffee and called you back the day after to make sure everything was OK.

Seriously.  Why do you think that most CEO’s were salesmen?  You’re the CEO of the security organization.  Act like it.

9.    Learn to Say “Yes” by saying “No” and vice-versa
Ah, “no.”  No other word with so few letters inspires such wretched responses from those who hear it.  And security folks just LOVE to say it.  We say it with such a sense of entitlement and overwhelming omnipotence, too.   We say it and then giggle to ourselves whilst we strike the Dr. Evil pinky pose wearing the schwag-shirt we scored from the $5,000 security conference we attended to learn how to more effectively secure the business by promoting security as an enabler.

It’s OK to say no, just think about how, why and when to say it.  Better yet, get someone else to say it, preferably the person who’s trying to get you to say yes.  Use the Jedi mind-trick.  Learn to sell – or unsell.  This is tricky security ninja skills and takes a while to master.

Having someone justify the business reason, risk and rewards for doing something – like you should be doing – is the best way to have someone talk themselves out of having you do something foolish in the first place.  You won’t win every battle, but the war will amass fewer casualties because you’re not running over every hill lobbing grenades at every request.

10.    Break the Rules
Security isn’t black and white.  Why?  Because despite the fact that we have binary compute systems enforcing the rules, those who push the limits use fuzzy logic and don’t concern themselves with the constraints of 1 and 0.   You shouldn’t, either.

Think different.  Be creative.  Manage risk and don’t be averse to it because if you’re running your program as a business, you make solid decisions based on assessments that include the potential of failure.

Don’t gauge success by thinking that unless you’ve reached 100% that 80% represents failure.  Incremental improvement over time – even when it’s not overtly dramatic – does make a difference.  If you measure it, by the way, it’s clearly demonstrable.

Challenge the status quo and do so with the vision of fighting the good fight – the right one for the right reasons – and seek to improve the health, survivability, and sustainability of the business.

Sometimes this means making exceptions and being human about things.  Sometimes it means getting somebody fired and cleared out of their cube.  Sometimes it means carrot, sometimes stick.

If you want to be a security guard, fine, but don’t be surprised when you get treated like one.  Likewise, don’t think that you’re entitled to a seat at the executive table just because you wear a tie, play golf with the CFO, or do the things on this list.

Value is demonstrated and trust is earned.   Learn to be adaptive, flexible and fair — dare I say pragmatic, and you’ll demonstrate your value and you’ll earn the trust and confidence of those around you.

So there you go.  One Venti-Iced-Americano inspired “Hoff’s giving back” rant. Preachy, somewhat cocky and self-serving?  Probably.  Useful and proven in battle?  Absolutely.   If anyone tells you any different, please ask them why they’re reading this post in the first place.

Think about this stuff.  It’s not rocket science.  Never has been.  Most of the greatest business people, strategists, military leaders, and politicians are nothing more than good listeners who can sell, aren’t afraid of making mistakes, learn from the ones they make and speak in a language all can relate to and understand.  They demonstrate value and think outside of the box; solving classes of problems rather than taking the parochial and pedestrian approach that we mostly see.

You can be great, too.  If you feel you can’t, then you’re in the wrong line of work.

/Hoff

The 4th Generation of Security Devices = UTM + Routing & Switching or New Labels = Perfuming a Pig?

June 22nd, 2007 5 comments

That’s it.  I’ve had it.  Again.  There’s no way I’d ever make it as a Marketeer.  <sigh>

I almost wasn’t going to write anything about this particular topic because my response can (and probably should) easily be perceived as and retorted against as a pissy little marketing match between competitors.  Chu don’t like it, Chu don’t gotta read it, capice?

Sue me for telling the truth. {strike that, as someone probably will}

However, this sort of blatant exhalation of so-called revolutionary security product and architectural advances disguised as prophecy is just so, well, recockulous, that I can’t stand it.

I found it funny that the Anti-Hoff (Stiennon) managed to slip another patented advertising editorial Captain Obvious press piece in SC Magazine regarding what can only be described as the natural evolution of network security products that plug into — but are not natively — routing or switching architectures.

I don’t really mind that, but to suggest that somehow this is an original concept is just disingenuous.

Besides trying to wean Fortinet away from the classification as UTM devices (which Richard clearly hates to be associated with) by suggesting that UTM should be renamed as "Flexible Security Platform," he does a fine job of asserting that a "geologic shift" (I can only assume he means tectonic) is coming soon in the so-called fourth generation of security products.

Of course, he’s completely ignoring the fact that the solution he describes is and has already been deployed for years…but since tectonic shifts usually take millions of years to culminate in something noticeably remarkable, I can understand his confusion.

As you’ll see below, calling these products "Flexible Security Platforms" or "Unified Network Platforms" is merely an arbitrary and ill-conceived hand-waving exercise in an attempt to differentiate in a crowded market.  Open source or COTS, ASIC/FPGA or multi-core Intel…that’s just the packaging and delivery mechanism.  You can tart it up all you want with fancy marketing…

It’s not new, it’s not revolutionary (because it’s already been done) and it sure as hell ain’t the second coming.  I’ll say it again, it’s been here for years.  I personally bought it and deployed it as a customer almost 4 years ago…if you haven’t figured out what I’m talking about yet, read on.

Here’s how C.O. describes what the company I work for has been doing for 6 years, and what he intimates Fortinet will provide that nobody else can:

We are rapidly approaching the advent of the fourth generation security platform. This is a device that can do all of the security functions that are lumped in to UTM but are also excellent network devices at layers two and three. They act as a switch and a router. They supplant traditional network devices while providing security at all levels. Their inherent architectural flexibility makes them easy to fit into existing environments and even make some things possible that were never possible before. For instance a large enterprise with several business units could deploy these advanced networking/security devices at the core and assign virtual security domains to each business unit while performing content filtering and firewalling between each virtual domain, thus segmenting the business units and maximizing the investment in core security devices.

One geologic shift that will occur thanks to the advent of these fourth generation security platforms is that networking vendors will be playing catch up, trying to patch more and more security functions into their under-powered devices or complicating their go to market message with a plethora of boxes while the security platform vendors will quickly and easily add networking functionality to their devices.

Fourth generation network security platforms will evolve beyond stand alone security appliances to encompass routing and switching as well. This new generation of devices will impact the networking industry as it scrambles to acquire the expertise in security and shift their business model from commodity switching and routing to value add networking and protection capabilities.

Let’s see…combine high-speed network processing whose routing/switching architecture was designed by the same engineers that designed Bay/Wellfleet’s core routers, add in a multi-core Intel processing/compute layer which utilizes virtualized, load-balanced security applications as a service layer that can be overlaid across a fast, reliable, resilient and highly-available network transport, and what do you get?

This:

Up to 32 GigE or 64 10/100 switching ports and 40 Intel cores in a single chassis today…and in Q3’07 you’ll also have the combination of our NextGen network processors which will provide up to 8x10GigE and 40xGigE with 64 MIPS Network Security cores combined with the same 40 Intel cores in the same chassis.

By the way, I consider that routing and switching are just table stakes, not market differentiators; in products like the one to the left, this is just basic expected functionality.

Furthermore, in this so-called next generation of "security switches," the customer should be able to run both open source as well as best-in-breed COTS security applications on the platform and not constrain the user to a single vendor’s version of the truth running proprietary software.

—–

But wait, it only gets better…what I found equally hysterical is the notion that Captain Obvious now has a sidekick!  It seems Alan Shimel has signed on as Richard’s Boy Wonder.  Alan’s suggesting that, once again, the magic bullet is Cobia, and that because he can run a routing daemon and his appliance has more than a couple of ports, it’s a router and a switch as well as a multi-function UTM/UNP swiss army knife of security & networking goodness — and he was the first to do it!  Holy marketing-schizzle, Batman!

I don’t need to re-hash this.  I blogged about it here before.

You can dress Newt Gingrich up as a chick but it doesn’t mean I want to make out with him…

This is cheap, cheap, cheap marketing on both your parts and don’t believe for a minute that customers don’t see right through it; perfuming pigs is not revolutionary, it’s called product marketing.

/Hoff

United’s entire flight control network down?

June 20th, 2007 No comments

I’m sitting on the tarmac at Logan in an A320.  I’ve been sitting here for almost an hour behind a fleet of other United planes.
According to the pilot, United has experienced a system-wide computer outage that affects the navigational systems of all planes.
We can’t take off because the plane doesn’t know where to go…and neither does the pilot.
So much for triple redundancy!

Hoff

** Update: I guess he wasn’t kidding!  That’s real-time blogging for you, folks!

I blogged this from my phone via email whilst the failure occurred.  The good news is that the delay rippled through the entire schedule; my connector in Denver to Oakland was also delayed, so I made the flight 😉

Here’s a link from Bloomberg as an update regarding the failure:

United Air Says Computer Failure Blocked All Takeoffs (Update5)

By Susanna Ray

June 20 (Bloomberg) — UAL Corp.’s United Airlines, the world’s second-biggest carrier, stopped all takeoffs around the globe for more than two hours today after the failure of the computer that controls flight operations.

The outage lasted from 9 to 11 a.m. New York time, delaying about 268 flights and forcing 24 cancellations, the Chicago-based airline said. United said it was investigating and hoped to resume normal operations by tomorrow.

United relies on the computer that broke down today for everything needed to dispatch flights, including managing crew scheduling and measuring planes’ weight and balance, spokeswoman Robin Urbanski said. Federal law requires weight-and-balance assessments for passenger flights before takeoff.

A worldwide grounding from a computer fault is “very unusual,” said Darryl Jenkins, an independent aviation consultant in Marshall, Virginia. “Somewhere there was a massive failure.”

Delays, Cancellations

Delays at Chicago’s O’Hare International Airport, the world’s second-busiest and United’s main hub, averaged one to two hours, said Wendy Abrams, a spokeswoman for the Chicago Airport System. Officials opened gates at the international terminal to unload stranded United passengers.

United has a backup for its Unimatic system, “and we’re investigating why that didn’t work,” Urbanski said. Planes airborne during the breakdown were allowed to keep flying, she said.

Preflight weight-and-balance checks are an important safety step. Improper loading reduces speed, efficiency, climbing rates and maneuverability, according to a Federal Aviation Administration handbook. Those changes, combined with abnormal stresses on an aircraft, can lead to crashes.

The Unimatic system “handles all the operational parts of the airline,” said Rick Maloney, a former United vice president for flight operations who is now dean of the aviation college at Western Michigan University in Kalamazoo.

‘Well Protected’

“That system is so well protected,” Maloney said in an interview. “I’m really pretty surprised.”

Companywide shutdowns because of computer glitches are infrequent, said Robert Mann of R.W. Mann & Co., a Port Washington, New York-based consultant. “But every airline has been bitten at one time or another by system failures of this sort, whether they be dispatch, departure control, passenger service, kiosks, communications, baggage or some other.”

Today’s delays will add to the industry’s tardiness so far this year.

U.S. airlines managed only 72.5 percent of flights on time this year through April, the worst rate since the federal government began keeping track in the current format in 1995, according to the U.S. Bureau of Transportation Statistics.

Consultants including Jenkins said today’s computer meltdown shouldn’t damage United’s long-term reputation. “These are things that you recover from,” he said.



Take5- Five Questions for Chris Wysopal, CTO Veracode

June 19th, 2007 No comments

In this first installment of Take5, I interview Chris Wysopal, the CTO of Veracode about his new company, secure coding, vulnerability research and the recent forays into application security by IBM and HP.

This entire interview was actually piped over a point-to-point TCP/IP connection using command-line redirection through netcat.  No packets were harmed during the making of this interview…

First, a little background on the victim, Chris Wysopal:

Chris Wysopal is co-founder and CTO of Veracode. He has testified on Capitol Hill on the subjects of government computer security and how vulnerabilities are discovered in software. Chris co-authored the password auditing tool L0phtCrack, wrote the Windows version of netcat, and was a researcher at the security think tank L0pht Heavy Industries, which was acquired by @stake. He was VP of R&D at @stake and later director of development at Symantec, where he led a team developing binary static analysis technology.

He was influential in the creation of responsible vulnerability disclosure guidelines and a founder of the Organization for Internet Safety.  Chris wrote "The Art of Software Security Testing: Identifying Security Flaws," published by Addison Wesley and Symantec Press in December 2006. He earned his Bachelor of Science degree in Computer and Systems Engineering from Rensselaer Polytechnic Institute.

1) You’re a founder of Veracode, which is described as the industry’s first provider of automated, on-demand application security solutions.  What sort of application security services does Veracode provide?  Binary analysis, Web Apps?
 
Veracode currently offers binary static analysis of C/C++ applications for Windows and Solaris and for Java applications.  This allows us to find the classes of vulnerabilities that source code analysis tools can find, but on the entire codebase, including the libraries which you probably don’t have source code for.  Our product roadmap includes support for C/C++ on Linux and C# on .Net.  We will also be adding additional analysis techniques to our flagship binary static analysis.
 
2) Is this a SaaS model?  How do you charge for your services?  Do you see manufacturers using your services, or enterprises?

 
Yes.  Customers upload their binaries to us and we deliver an analysis of their security flaws via our web portal.  We charge by the megabyte of code.  We have both software vendors and enterprises who write or outsource their own custom software using our services.  We also have enterprises who are purchasing software ask the software vendors to submit their binaries to us for a 3rd party analysis.  They use this analysis as a factor in their purchasing decision.  It can lead to a "go/no go" decision, a promise by the vendor to remediate the issues found, or a reduction in price to compensate for the cost of additional controls or the cost of incident response that insecure software necessitates.
 
3) I was a Qualys customer — a VA/VM SaaS company.  Qualys had to spend quite a bit of time convincing customers that allowing for the storage of their VA data was secure.  How does Veracode address a customer’s security concerns when uploading their applications?

We are absolutely fanatical about the security of our customers’ data.  I look back at the days when I was a security consultant, where we had vulnerability data on laptops and corporate file shares, and I say, "what were we thinking?"  All customer data at Veracode is encrypted in storage and at rest with a unique key per application and customer.  Everyone at Veracode uses 2-factor authentication to log in, and 2-factor is the default for customers.  Our data center is a SAS 70 Type II facility.  All data access is logged, so we know exactly who looked at what and when.  As security people we are professionally paranoid, and I think it shows through in the system we built.  We also believe in 3rd party verification, so we have had a top security boutique do a security review of our portal application.
 
4) With IBM’s acquisition of Watchfire and today’s announcement that HP will buy SPI Dynamics, how does Veracode stand to play in this market of giants who will be competing to drive service revenues?

 
We have designed our solution from the ground up to have the Web 2.0 ease of use and experience, and we have the quality of analysis that I feel is the best in the market today.  An advantage is that Veracode is an independent assessment company that customers can trust not to play favorites with other software companies because of partnerships or alliances.  Would Moody’s or Consumer Reports be trusted as a 3rd party if they were part of a big financial or technology conglomerate?  We feel a 3rd party assessment is important in the security world.
 
5) Do you see the latest developments in vulnerability research, with the drive for pay-for-zero-day initiatives, pressuring developers to produce secure code out of the box for fear of exploit, or is it driving the activity to companies like yours?

 
I think the real driver for developers to produce secure code, and for developers and customers to seek code assessments, is the reality that the cost of insecure code goes up every day, and it’s adding to the operational risk of companies that use software.  People exploiting vulnerabilities are not going away, and there is no way to police the internet of vulnerability information.  The only solution is for customers to demand more secure code, and proof of it, and for developers to deliver more secure code in response.

I see your “More on Data Centralization” & Raise You One “Need to Conduct Business…”

June 19th, 2007 1 comment

Bejtlich continues to make excellent points regarding his view on centralizing data within an enterprise.  He cites the increase in litigation regarding inadequate eDiscovery investment and the increasing pressures amassed from compliance.

All good points, but I’d like to bring the discussion back to the point I was trying to make initially and here’s the perfect perch from which to do it.  Richard wrote:

"Christopher Hoff used the term "agile" several times in his good blog post. I think "agile" is going to be thrown out the window when corporate management is staring at $50,000 per day fines for not being able to produce relevant documents during ediscovery. When a company loses a multi-million dollar lawsuit because the judge issued an adverse inference jury instruction, I guarantee data will be centralized from then forward."

…how about when a company loses the ability to efficiently and effectively conduct business because they spend so much money and time on "insurance policies" against which a balanced view of risk has not been applied?  Oh, wait.  That’s called "information security." 😉

Fear.  Uncertainty.  Doubt.  Compliance.  Ugh.  Rinse, lather, repeat.

I’m not taking what you’re proposing lightly, Richard, but the notion of agility, time to market, cost transformation and enhancing customer experience are being tossed out with the bathwater here. 

Believe it or not, we have to actually have a sustainable business in order to "secure" it. 

It’s fine to be advocating Google Gears and all these other Web 2.0 applications and systems. There’s one force in the universe that can slap all that down, and that’s corporate lawyers. If you disagree, who do you think has a greater influence on the CEO: the CTO or the corporate lawyer? When the lawyer is backed by stories of lost cases, fines, and maybe jail time, what hope does a CTO with plans for "agility" have?

But going back to one of your own mantras, if you bake security into your processes and SDLC in the first place, then the CEO/CTO/CIO and legal counsel will already have assessed the company’s position and balanced the risk scorecard to ensure they have exercised the appropriate due care. 

The uncertainty and horrors associated with the threat of punitive legal impacts have, are, and will always be there…and they will continue to be exploited by those in the security industry to buy more stuff and justify a paycheck.

Given the business we’re in, it’s not a surprise that the perspective presented is very, very siloed and focused on the potential "security" outcomes of what happens if we don’t start centralizing data now; everything looks like a nail when you’re a hammer.

However, you still didn’t address the other two critical points I made previously:

  1. The underlying technology associated with decentralization of data and applications is at complete odds with the "curl up in a fetal position and wait for the sky to fall" approach
  2. The only reason we have security in the first place is to ensure survivability and availability of service — and make sure that we stay in business.  That isn’t really a technical issue at all, it’s a business one.  I find it interesting that you referenced this issue as the CTO’s problem and not the CIO’s.

As to your last point, I’m convinced that GE — with the resources, money and time it has to bear on a problem — can centralize its data and resources…they can probably get cold fusion out of a tuna fish can and a blow pop, but for the rest of us on planet Earth, we’re going to have to struggle along trying to cram all the ‘agility’ and enablement we’ve just spent the last 10 years giving to users back into the compliance bottle.

/Hoff

Off to NorCal & Utah this Week /UK & Milan, Italy next week.

June 19th, 2007 No comments

I4
I’m traveling to NorCal and Utah this week for customer visits, then off to the UK, and then to Milan, Italy to speak at the I4 forum.

I’ll be thinking of you all over a bowl of penne and lovely Barolo.  Ciao!

If you’re in the area, ping me.

/Hoff

Categories: Travel Tags:

Bye Bye, SPI (Dynamics…)

June 19th, 2007 1 comment

Spilogo_3
I think this one has been forecasted about eleventy-billion times already, but HP is acquiring SPI Dynamics.

I think it’s clear why, and HP, just like IBM, is going to use this to drive service revenue.  I wonder when HP’s acquisition of an MSSP will take place to compete with BT’s efforts with Counterpane & INS, etc.?

Well, that leaves only 600+ security companies left in the security consolidation dating pool…

/Hoff

Categories: Web Application Security Tags:

Security Application Instrumentation: Reinventing the Wheel?

June 19th, 2007 No comments

Bikesquarewheel
Two of my favorite bloggers engaged in a trackback love-fest lately on the topic of building security into applications; specifically, enabling applications as a service delivery function to be able to innately detect, respond to and report attacks.

Richard Bejtlich wrote a piece called Security Application Instrumentation and Gunnar Peterson chimed in with Building Coordinated Response In – Learning from the Anasazis.  As usual, these are two extremely well-written pieces that arrive at a well-constructed conclusion: we need a standard methodology and protocol for this reporting.  I think this exquisitely important point will be missed by most of the security industry — specifically vendors.

While security vendors’ hearts are in the right place (stop laughing,) the "security is the center of the universe" approach to telemetry and instrumentation will continue to fall on deaf ears because there are no widely-adopted standard ways of reporting across platforms, operating systems and applications that truly integrate into a balanced scorecard/dashboard demonstrating security’s contribution to service availability across the enterprise.   I know what you’re thinking…"Oh God, he’s going to talk about metrics!  Ack!"  No.  That’s Andy’s job and he does it much better than I.

This mess is exactly why the SEIM market emerged: to clean up the cesspool of log dumps that spew forth from devices that are, by all approximation, utterly unaware of the rest of the ecosystem in which they participate.  Take all these crappy log dumps via Syslog and SNMP (which can still be proprietary,) normalize if possible, correlate "stuff" and communicate that something "bad" or "abnormal" has occurred.

How does that communicate what this really means to the business, its ability to function, deliver service and ultimately the impact on risk posture?  It doesn’t, because security reporting is the little kid wearing a dunce hat standing in the corner; it doesn’t play well with others.
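To make the normalization problem concrete, here’s a toy sketch of what a SEIM spends its life doing.  The log formats, field names and "correlation" below are entirely hypothetical — the point is simply that every device speaks its own dialect and somebody has to map them all onto a common schema before any cross-device conclusion can be drawn:

```python
import re

# Two hypothetical, vendor-specific log formats -- the kind of
# inconsistency a SEIM has to normalize before it can correlate anything.
FIREWALL_RE = re.compile(
    r"(?P<ts>\S+) fw1 DENY src=(?P<src>\S+) dst=(?P<dst>\S+)")
IDS_RE = re.compile(
    r"(?P<ts>\S+) ids7 ALERT \[(?P<sig>[^\]]+)\] (?P<src>\S+) -> (?P<dst>\S+)")

def normalize(line):
    """Map a raw log line onto a common event schema, or None if unknown."""
    m = FIREWALL_RE.match(line)
    if m:
        return {"time": m.group("ts"), "source": m.group("src"),
                "target": m.group("dst"), "action": "deny",
                "device": "firewall"}
    m = IDS_RE.match(line)
    if m:
        return {"time": m.group("ts"), "source": m.group("src"),
                "target": m.group("dst"), "action": "alert",
                "signature": m.group("sig"), "device": "ids"}
    return None  # unknown dialect -- exactly the gap a standard would close

events = [normalize(l) for l in [
    "2007-06-19T10:01:02 fw1 DENY src=10.1.1.5 dst=192.168.0.9",
    "2007-06-19T10:01:03 ids7 ALERT [sql-injection] 10.1.1.5 -> 192.168.0.9",
]]

# Naive "correlation": the same source hitting the same target on two devices.
correlated = (events[0]["source"] == events[1]["source"]
              and events[0]["target"] == events[1]["target"])
```

Notice that even after the regex gymnastics, the output says nothing about business impact — which is the whole complaint above.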

Gunnar stated this well:

Coordinated detection and response is the logical conclusion to defense in depth security architecture. I think the reason that we have standards for authentication, authorization, and encryption is because these are the things that people typically focus on at design time. Monitoring and auditing are seen as runtime operational activities, but if there were standards-based ways to communicate security information and events, then there would be an opportunity for the tooling and processes to improve, which is ultimately what we need.

So, is the call for "security application instrumentation" doomed to fail because we in the security industry will try to reinvent the wheel with proprietary solutions and suggest that the current toolsets and frameworks, which are available as part of a much larger enterprise management and reporting strategy, are not enough? 

Bejtlich remarked on the need for mechanisms that report application state to be built into the application itself, reporting more than just performance:

Today we need to talk about applications defending themselves. When they are under attack they need to tell us, and when they are abused, subverted, or breached they would ideally also tell us.

I would like to see the next innovation be security application instrumentation, where you devise your application to report not only performance and fault logging, but also security and compliance logging. Ideally the application will be self-defending as well, perhaps offering less vulnerability exposure as attacks increase (being aware of DoS conditions of course).

I would agree, but I get the feeling that without integrating this telemetry and its output metrics into response systems whose primary role is to speak to delivery and service levels — of which "security" is a huge factor — the relevance of this data within the enterprise management single pane of glass is lost.
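For illustration only, the kind of self-aware application Bejtlich describes might look something like the toy below — a service that emits security events alongside the usual performance telemetry, and "hardens" itself as attack indicators accumulate.  Every name and the crude attack heuristic are my own invention, not anyone’s shipping design:

```python
import logging
import time

log = logging.getLogger("app.security")

class SelfAwareService:
    """Toy service that reports security events alongside performance
    telemetry, and reduces its exposure as attack indicators pile up."""

    def __init__(self, attack_threshold=3):
        self.attack_count = 0
        self.attack_threshold = attack_threshold

    @property
    def hardened(self):
        # "Self-defending": past the threshold, risky features get disabled.
        return self.attack_count >= self.attack_threshold

    def handle(self, request):
        start = time.monotonic()
        # Deliberately crude attack heuristic, for illustration only.
        if "'" in request or "<script" in request:
            self.attack_count += 1
            log.warning("security event: suspicious input %r (count=%d, hardened=%s)",
                        request, self.attack_count, self.hardened)
            return "rejected"
        elapsed = time.monotonic() - start
        log.info("performance: handled in %.6fs", elapsed)  # the usual telemetry
        return "ok"

svc = SelfAwareService()
for r in ["hello", "' OR 1=1 --", "<script>x</script>", "' DROP TABLE users"]:
    svc.handle(r)
```

The interesting part isn’t the heuristic; it’s that the security events ride the same logging channel as the performance events — which is precisely the integration point the surrounding discussion says is missing.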

So, rather than reinvent the wheel and incrementally "innovate," why don’t we take something like the Open Group’s Application Response Measurement (ARM) standard, make sure we subscribe to a telemetry/instrumentation format that speaks to the real issues and enable these systems to massage our output in terms of the language of business (risk?) and work to extend what is already a well-defined and accepted enterprise response management toolset to include security?

To wit:

The Application Response Measurement (ARM) standard describes a common method for integrating enterprise applications as manageable entities. The ARM standard allows users to extend their enterprise management tools directly to applications, creating a comprehensive end-to-end management capability that includes measuring application availability, application performance, application usage, and end-to-end transaction response time.
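ARM’s actual bindings are C and Java, so the sketch below is only a Python-flavored rendering of the pattern — start a measured transaction, stop it, report its status — extended with a hypothetical security outcome field of the sort being argued for here.  The constant names and the `security_status` extension are mine, not part of the standard:

```python
import time

# ARM-style transaction status codes (names here are illustrative).
ARM_GOOD, ARM_ABORT, ARM_FAILED = 0, 1, 2

class ArmStyleTransaction:
    """Context manager mimicking ARM's start/stop transaction pattern,
    with a hypothetical security field riding alongside the usual
    availability and response-time measurements."""

    def __init__(self, app, name):
        self.app, self.name = app, name

    def __enter__(self):
        self.start = time.monotonic()
        self.status = ARM_GOOD
        self.security_status = "clean"  # hypothetical extension, not in ARM
        return self

    def __exit__(self, exc_type, exc, tb):
        elapsed = time.monotonic() - self.start
        if exc_type is not None:
            self.status = ARM_FAILED
        # One record feeds both the ops dashboard and the security view.
        self.record = {"app": self.app, "txn": self.name,
                       "elapsed_s": elapsed, "status": self.status,
                       "security": self.security_status}
        return False  # never swallow exceptions

with ArmStyleTransaction("orders", "checkout") as txn:
    pass  # ...do the work; set txn.security_status on suspicious input...
```

The design point: security becomes just one more field in a record the enterprise management tooling already knows how to consume, rather than a separate proprietary feed.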

Or how about something like EMC’s Smarts:

Maximize availability and performance of mission-critical IT resources — and the business services they support. EMC Smarts software provides powerful solutions for managing complex infrastructures end-to-end, across technologies, and from the network to the business level. With EMC Smarts innovative technology you can:

  • Model components and their relationships across networks, applications, and storage to understand effect on services.
  • Analyze data from multiple sources to pinpoint root cause problems—automatically, and in real time.
  • Automate discovery, modeling, analysis, workflow, and updates for dramatically lower cost of ownership.

…add security into these and you’ve got a winner.   

There are already industry standards (or at least huge market momentum)
around intelligent automated IT infrastructure, resource management and service level reporting.
We should get behind a standard that elevates the perspective of how security contributes to service delivery (and dare I say risk management) instead of trying to reinvent the wheel…unless you happen to like the Hamster Wheel of Pain…

/Hoff

Does Centralized Data Governance Equal Centralized Data?

June 17th, 2007 4 comments

Cube
I’ve been trying to construct a palette of blog entries over the last few months which communicates the need for a holistic network, host and data-centric approach to information security and information survivability architectures. 

I’ve been paying close attention to the dynamics of the DLP/CMF market/feature positioning as well as what’s going on in enterprise information architecture with the continued emergence of WebX.0 and SOA.

That’s why I found this Computerworld article written by Jay Cline very interesting, as it focused on the need for a centralized data governance function within an organization in order to manage the risk associated with the information management lifecycle (which includes security and survivability).  The article also discussed how roles within the organization, namely the CIO/CTO, will evolve in parallel.

The three primary indicators for this evolution were summarized as:

1. Convergence of information risk functions
2. Escalating risk of information compliance
3. Fundamental role of information.

Nothing terribly earth-shattering here, but the exclamation point of this article, enabling a centralized data governance organization, is a (gasp!) tricky combination of people, process and technology:

"How does this all add up? Let me connect the dots: Data must soon become centralized, its use must be strictly controlled within legal parameters, and information must drive the business model. Companies that don’t put a single, C-level person in charge of making this happen will face two brutal realities: lawsuits driving up costs and eroding trust in the company, and competitive upstarts stealing revenues through more nimble use of centralized information."

Let’s deconstruct this a little, because I totally get the essence of what is proposed, but there are some realities that must be discussed.  Working backwards:

  • I agree that data and its use must be strictly controlled within legal parameters.
  • I agree that a single, C-level person needs to be accountable for the data lifecycle.
  • However, whilst it would be fantastic to centralize data, I think it’s a nice theory in the wrong universe. 

Interestingly, Richard Bejtlich focused his response to the article on this very notion, but I can’t get past a couple of issues, some of them technical and some of them business-related.

There’s a confusing mish-mash alluded to in Richard’s blog of "second home" data repositories that maintain copies of data, yet somehow also magically enforce data control and protection schemes outside of the repository while simultaneously allowing the flexibility of data creation "locally."  The competing theme for me is that centralization of data is really irrelevant — it’s convenient — but what you really need is the (and you’ll excuse the lazy use of a politically-charged term) "DRM" functionality to work irrespective of where data is created, stored, or used.

Centralized storage is good (and selfishly so for someone like Richard) for performing forensics and auditing, but it’s not necessarily technically or fiscally efficient and doesn’t necessarily align to an agile business model.

The timeframe for this evolution toward data centralization was not really established, but we don’t have the most difficult part licked yet — the application of either the accompanying metadata describing the information assets we wish to protect OR the ability to uniformly classify and enforce their creation, distribution, utilization and destruction.

Now we’re supposed to also be able to magically centralize all our data, too?  I know that large organizations have embraced the notion of data warehousing, but it’s not the underlying data stores I’m truly worried about, it’s the combination of data from multiple silos within the data warehouses that concerns me and its distribution to multi-dimensional analytic consumers. 

You may be able to protect a DB’s table, row, column or a file, but how do you apply a policy to a distributed ETL function across multiple datasets and paths?

ATAMO?  (And Then A Miracle Occurs) 

What I find intriguing about this article is that the so-described pendulum effect of data centralization (data warehousing, BI/DI) and resource centralization (data center virtualization, WAN optimization/caching, thin client computing) seems to be on a direct collision course with the way in which applications and data are being distributed with Web2.0/Service Oriented architectures and delivery underpinnings such as rich(er) client-side technologies like mash-ups and AJAX…

So what I don’t get is how one balances centralizing data when today’s emerging infrastructure and information architectures are constructed to do just the opposite: distribute data, processing and data re-use/transformation across the Enterprise.  We’ve already let the data genie out of the bottle and now we’re trying to cram it back in?  (*Please see below for a perfect illustration.)

I ask this again within the scope of deploying a centralized data governance organization and its associated technology and processes within an agile business environment. 

/Hoff

P.S. I expect that a certain analyst friend of mine will be emailing me in T-Minus 10, 9…

*Here’s a perfect illustration of the futility of centrally storing "data."  Click on the image and notice the second bullet item…:

Googlegears

Really, There’s More to Security than Admission/Access Control…

June 16th, 2007 2 comments

Wired_science_religion
Dr. Joseph Tardo over at the Nevis Networks Illuminations blog composed a reasonably well-balanced commentary regarding one or more of my posts in which I was waxing philosophical about my beliefs regarding keeping the network plumbing dumb and overlaying security as a flexible, agile, open and extensible services layer.

It’s clear he doesn’t think this way, but I welcome the discourse.  So let me make something clear:

Realistically, and especially in non-segmented flat networks, I think there are certain low-level security functions that will do well by being served up by switching infrastructure as security functionality commoditizes, but I’m not quite sure yet, for the most part, where I draw the line between utility and intelligence.  I do, however, think that NAC is one of those utility services.

I’m also unconvinced that access-grade, wiring closet switches are architected to scale in functionality, efficacy or performance, or to provide any more value or differentiation beyond port density, over the normal bolt-on appliances which continue to cause massive operational and capital expenditure due to continued forklifts over time.  Companies like Nevis and Consentry quietly admit this too, which is why they have both "secure switches" AND appliances that sit on top of the network…

Joseph suggested he was entering into a religious battle in which he summarized many of the approaches to security that I have blogged about previously and I pointed out to him on his blog that this is exactly why I practice polytheism 😉 :

In case you aren’t following the religious wars going on in the security blogs and elsewhere, let me bring you up to date.

It goes like this. If you are in the client software business, then security has to be done in the endpoints and the network is just dumb “plumbing,” or rather, it might as well be because you can’t assume anything about it. If you sell appliances that sit here and there in the network, the network sprouts two layers, with the “plumbing” part separated from the “intelligence.” Makes sense, I guess. But if you sell switches and routers then the intelligence must be integrated in with the infrastructure. Now I get it. Or maybe I’m missing the point: what if you sell both appliances and infrastructure?

I believe that we’re currently forced to deploy defense in depth due to the shortcomings of today’s solutions.  I believe the "network" will not and cannot deliver all the security required.  I believe we’re going to have to invest more in secure operating systems and protocols.  I further believe that we need to be data-centric in our application of security.  I do not believe in single-point product "appliances" that are fundamentally functionally handicapped.  As a mechanism for delivering security that matters across the network, however, I believe in this approach.

Again, the most important difference between what I believe and what Joseph points out above is that the normal class of "appliances" he’s trying to suggest I advocate simply aren’t what I advocate at all.  In fact, one might surprisingly confuse the solutions I do support as "infrastructure" — they look like high-powered switches with a virtualized blade architecture integrated into the solution.

It’s not an access switch, it’s not a single function appliance and it’s not a blade server and it doesn’t suffer from the closed proprietary single vendor’s version of the truth.  To answer the question, if you sell and expect to produce both secure appliances and infrastructure, one of them will come up short.   There are alternatives, however.

So why leave your endpoints, the ones that have all those vulnerabilities that created the security industry in the first place, to be hit on by bots, “guests,” and anyone else that wants to? I don’t know about you, but I would want both something on the endpoint, knowing it won’t be 100% but better than nothing, and also something in the network to stop the nasty stuff, preferably before it even got in.

I have nothing to disagree with in the paragraph above — short of the example of mixing active network defense with admission/access control in the same sentence; I think that’s confusing two points.   Back to the religious debate as Joseph drops back to the "Nevis is going to replace all switches in the wiring closet" approach to security via network admission/access control:

Now, let’s talk about getting on the network. If the switches are just dumb plumbing they will blindly let anyone on, friend or foe, so you at least need to beef up the dumb plumbing with admission enforcement points. And you want to put malware sensors where they can be effective, ideally close to entry points, to minimize the risk of having the network infrastructure taken down. So, where do you want to put the intelligence, close to the entry enforcement points or someplace further in the bowels of the network where the dumb plumbing might have plugged-and-played a path around your expensive intelligent appliance?

That really depends upon what you’re trying to protect; the end point, the network or the resources connected to it.  Also, I won’t/can’t argue about wanting to apply access/filtering (sounds like IPS in the above example) controls closest to the client at the network layer.  Good design philosophy.   However, depending upon how segmented your network is, the types, value and criticality of the hosts in these virtual/physical domains, one may choose to isolate by zone or VLAN and not invest in yet another switch replacement at the access layer.

If the appliance is to be effective, it has to sit at a choke point and really be an enforcement point. And it has to have some smarts of its own. Like the secure switch that we make.

Again, that depends upon your definition of enforcement and applicability.  I’d agree that in flat networks, you’d like to do it at the port/host level, though replacing access switches to do so is usually not feasible in large networks given investments in switching architectures.  Typical fixed configuration appliances overlaid don’t scale, either.

Furthermore, depending upon your definition of what an enforcement zone and its corresponding diameter are (port, VLAN, IP subnet), you may not care.  So putting that "appliance" in place may not be as foreboding as you wager, especially if it overlays across these boundaries satisfactorily.

We will see how long it is before these new-fangled switch vendors (which used to be SSL VPNs, then became IPS appliances, and have now "evolved" into NAC solutions) become whatever the next buzzword/technology of tomorrow represents…especially now with Cisco’s revitalized technology refresh for "secure" access switches in the wiring closets.  Caymas, Array, and Vernier (amongst many) are perfect examples.

When it comes down to it, in the markets Crossbeam serves — and especially the largest enterprises — they are happy with their switches, they just want the best security choice on top of it provided in a consolidated, agile and scalable architecture to support it.

Amen.

/Hoff