Archive

Archive for January, 2008

Process Control Systems (SCADA and the like) & Virtualization

January 31st, 2008 3 comments

Just when you get out, they pull you back in…

Jason Holcomb over at the Digital Bond blog posted something that attracted my attention.  He popped up an innocuous entry titled "Virtualization in the SCADA World: Part 1" which intrigued me for reasons that should be obvious to anyone who reads my steaming pile of blogginess with any sort of regularity.

It would be easy to knee-jerk and simply roll my eyes, suggesting that adding virtualization to the "security by obscurity" approach we’ve seen argued recently is just inviting disaster (literally), but I’m trying to be rational about this.  I want to understand the other camp’s position and learn from it, hopefully helping to break down a wall or two…

Jason sets it up:

A few years back, the traditional IT world was debating the merits of virtualization. There were concerns about performance, security, vendor support, and a host of other issues. Fast-forward to today, however, and you’ll find virtual machines in use in nearly every data center.

I think it’s fair to say that while most folks would be hard pressed to dispute the merits of virtualization, the concerns regarding "…performance, security, vendor support, and a host of other issues" are hardly resolved.  In fact, they are escalating.

<snip>

So what are the implications of this in the SCADA world? I think it’s just a matter of time before we see more widespread acceptance of VMware and other virtualization platforms in production control systems. The benefit here may be less about cost savings, though, and more about increased functionality. The ability to snapshot and clone machines for backup and testing, for example, is very attractive.

I think the paragraph above is extremely telling because it’s focused on debating the value proposition, which is really a foregone conclusion for all the reasons Jason mentions.  The real meat will hopefully be discussed in the follow-ons:

We’re going to examine this subject over a series of blog posts. Hopefully we’ll cover all the major topics – security, reliability, performance, serial communication issues, vendor support, and adoption rate, to name a few.

I look forward to your comments and opinions.

In my first comment on Jason’s posting, I alluded to a whole host of virtualization-related issues which are grounded in practice and not hype, and asked, since SCADA security is billed as being SO much different than "IT security," what this intersection will bring and how one might assess risk (and against what).

Further, given various C&A (certification and accreditation) standards, I’m interested in how one might approach (depending upon industry) holding these systems up to a C&A process once virtualization is added to the mix.

It will be an interesting discussion, methinks.

/Hoff

 
Categories: Virtualization Tags:

Mommy, Why Is There a Server In the House?

January 29th, 2008 6 comments

Hat tip to Scott Lowe

This is an honest-to-[insert deity here] book.  You can check it out on Amazon.  You can also read the online version here.

Unfortunately this book hits a little too close to home.  Literally.

You see, there are currently two rackmount appliances, several switches and some laptops whirling away in my wife’s sun room.  Last week they were accompanied by a couple of network security appliances as well.  I work out of my house lab, so I need stuff to hook to my 20Mb/s FIOS line to justify the expense (besides the UFC pay-per-views.)

Between them, these global warmers have what must be several hundred cooling fans, various buzzing thingys and 40 power supplies.  They’re so neat to look at in the dark, casting eerie LED reflections onto the snow outside on my deck.  Yet I digress.

But what are they doing here, you ask?  Why are they in the Sun room?

That is exactly the question asked by my four year old.  Daddy calmly answered "Well because they sound like the combined output of a swarm of angry bees and a Sikorsky dual-rotor helicopter and I sure as hell don’t want them in my office."

Puzzled, she toddled off to watch Dora the Explorer downstairs in the family room where the only thing resembling a computer is the Verizon FIOS STB with DVR.  Ingrate.

I shall print this handy guide to edumacating my child-spores so that no longer shall I have to endure their petty little questions regarding the 20-node Beowulf cluster I’m building in the kitchen.

Anyone have the ISO for the latest DivorceOS?

/Hoff

Categories: Jackassery Tags:

I/O Virtualization: The Battle for the Datacenter OS and What This Means to Security

January 28th, 2008 3 comments

One of the very profound impacts virtualization will have on security is the resultant collateral damage caused by what I call the "battle for the datacenter OS" framed by vendors who would ordinarily not be thought of as "OS vendors." 

I call the main players in this space the "Three Kings:" Cisco, VMware and EMC.
Microsoft is in there also, but that’s a topic for another post as I bifurcate operating system vendors in the classical sense from datacenter infrastructure platforms.  Google deserves a nod, too.

The "Datacenter OS" I am speaking of is the abstracted amalgam of virtualization and converged networking/storage that delivers the connected and pooled resource equivalent of the utility power grid.   Nick Carr reflects in his book "The Big Switch":

“A hundred years ago, companies stopped producing their own power with steam engines and generators and plugged into the newly built electric grid.”

The "datacenter" and its underlying "operating system," in whatever abstracted form they will manifest themselves, will become this service layer delivery "grid" to which all things will connect; services will be merely combinations of resources and capabilities which are provisioned dynamically.

We see this starting to take form with the innovation driven by virtualization, the driving forces of convergence, the re-emergence of grid computing, the architectures afforded by mash-ups and the movements and investments of the Three Kings in all of these areas.

It’s pretty clear that these three vendors are actively responsible for shaping the future of computing as we know it.  However, it’s not at all clear to me how much of the strategic overlap between them is accidental versus planned, but they’re all approaching the definition of how our virtualized computing experience will unfold in very similar ways, albeit from slightly different perspectives.

One of the really interesting examples of this is how virtualization and convergence are colliding to produce the new model of the datacenter which blurs the lines between computing, networking and storage.

Specifically, the industry — as driven by customers — is trending toward the following:

  • Upgrading from servers to blades
  • Moving from hosts and switches to clusters and fabrics
  • Evolving from hardware/software affinity to grid/utility computing
  • Transitioning from infrastructure to service layers in “the cloud”

The topic of this post is really about the second bullet, moving from the notion of the classical hosts/servers plugging into separate network and storage switches to instead clusters of resources connecting to fabric(s) such that what we end up with are pools of resources to be provisioned, allocated and dispatched where, when and how needed. 

This is where I/O virtualization enters the picture.  I/O virtualization at the macro level of the datacenter describes the technology which enables the transition from the discrete and directly-connected model of storage and networking to a converged, virtualized model wherein network and storage resources are aggregated into a single connection to the "fabric."

Instead of having separate Ethernet, Fibre Channel and InfiniBand connections, you’d have a single pipe connected to a "virtual connectivity switch" that provides on-demand, dynamic and virtualized allocation of resources to anything connected to the fabric.  The notion of physical affinity from the server/host’s perspective goes away.
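If it helps to picture what that buys you, here’s a toy sketch (entirely my own, not any vendor’s actual API) of the pooling idea: the fabric owns the virtual NICs and HBAs, and hosts borrow and return them over their single pipe.

# Toy model of I/O virtualization: hosts draw vNICs/vHBAs from a shared
# fabric pool over one physical link instead of owning dedicated Ethernet
# NICs and Fibre Channel HBAs. Illustrative only; names are made up.

class Fabric:
    def __init__(self, vnic_pool, vhba_pool):
        self.vnics = list(range(vnic_pool))   # available virtual NICs
        self.vhbas = list(range(vhba_pool))   # available virtual HBAs
        self.bindings = {}                    # host -> allocated resources

    def attach(self, host, vnics=1, vhbas=1):
        """Dynamically bind virtual I/O resources to a host's single pipe."""
        alloc = {
            "vnics": [self.vnics.pop() for _ in range(vnics)],
            "vhbas": [self.vhbas.pop() for _ in range(vhbas)],
        }
        self.bindings[host] = alloc
        return alloc

    def detach(self, host):
        """Return a host's virtual interfaces to the shared pool."""
        alloc = self.bindings.pop(host)
        self.vnics.extend(alloc["vnics"])
        self.vhbas.extend(alloc["vhbas"])

fabric = Fabric(vnic_pool=64, vhba_pool=32)
print(fabric.attach("host-01", vnics=2, vhbas=2))
fabric.detach("host-01")   # re-provisioning is a table update, not a recabling

The point of the toy is the last line: moving I/O around becomes an allocation change on the fabric rather than a trip to the rack with a handful of cables.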

Andy Dornan from Information Week just did a nice write-up titled "Cisco pitches virtual switches for next-gen data centers."

It’s obviously focused on Cisco’s Nexus 7000 Series switch, but also gives some coverage of Brocade’s DCX Backbone, Xsigo’s Director and 3Leaf’s v-8000 products.

Check out what Andy had to say about Cisco’s strategy:

Cisco’s vision is one in which big companies off-load an increasing number of server tasks to network switches, with servers ultimately becoming little more than virtual machines inside a switch.*

The Nexus doesn’t deliver that, but it makes a start, aiming to virtualize the network interface cards, host bus adapters, and cables that connect servers to networks and remote storage. At present, those require dedicated local area networks and storage area networks, with each using a separate network interface card and host bus adapter for every virtual server. The Nexus aims to consolidate them all into one (or two, for redundancy), with virtual servers connecting through virtual NICs.

This stuff isn’t vaporware anymore.  These products are real…from numerous entities.  These companies — and especially Cisco — are on a mission to re-write the datacenter blueprint and security along with it.  VMware’s leading the virtualization charge and Cisco’s investing for the long run.  When you look at their investment in VMware, the I/O virtualization play and what they’re doing with vFrame, it’s impressive — and scary at the same time.

Them’s a lot of eggs in one basket, and it’s perfectly clear that there is a huge sucking sound coming from the traditional security realm as we look out over the horizon.  How do you apply a static security sensibility grounded in the approaches of 20 years ago to an amorphous, fluid, distributed and entirely dynamic pooled set of resources and information?

Cisco has thrown their hat in the ring to address the convergence of role-based admission and access control with the announcement of TrustSec, which will be available in the Nexus as it is in the higher-end Catalyst switches.  Other vendors such as HP, Extreme and now Juniper, as well as upstarts like Nevis and ConSentry, have their own perspectives.  What each of these infrastructure networking vendors has in store for how their solutions will play in the world of virtualized and distributed computing is still to unfold.

How might this emerging phase of technology, architecture, provisioning, management, deployment and virtualization of resources impact security, especially since we’ve barely even begun to absorb the impact server virtualization already has?  One word:

Completely.

More on this topic shortly…

/Hoff

*Update: A colleague of mine from Unisys, Michael Salsburg, prompted me via discussion to clarify a point.  I think that, for at least the short term, the "server tasks" that will be offloaded to I/O virtualization solutions such as Cisco’s will be fairly narrow in scope and logically defined.  However, given that NX-OS is Linux-based, one might expect to see a hypervisor-like capability within the switch itself, enabling VM’s and applications to be run directly within it.

Certainly we can expect an intermediary technology derivation which would include Cisco developing their own virtual switch that complements/replaces the vSwitch present in the VMM today; at this point, given the heft/performance of the Nexus, one could potentially see it existing "outside" the vHost, using a high-speed 10Gb/s connection to redirect all virtual network functions to the external switch…

Categories: Cisco, Virtualization, VMware Tags:

What a Shocker, Stiennon & I Disagree: Arbor + Ellacoya Make Total Sense…

January 25th, 2008 3 comments

"Common sense has nothing to do with it. When I say he’s wrong, he’s wrong." — Ethel Mertz, I Love Lucy.

What a surprise, I disagree totally with Richard Stiennon on his assessment of the value proposition regarding the acquisition of Ellacoya by Arbor Networks.

Specifically, I find it hysterical that Richard claims that Arbor is "abandoning the security space."  Just the opposite, I believe Arbor — given what they do — is pursuing a course of action that will allow them to not only continue to cement their value proposition in the security space, but extend it further, both in the carrier and enterprise space.

I think that it comes down to what Richard defines as "security" — a term I obviously despise for reasons just like this.

Here’s where we diverge:

I was actually in Ann Arbor last week when news broke that Arbor Networks had acquired Ellacoya a so called “deep packet inspection” technology vendor. I was perplexed. That’s not security.

"That’s not security." Funny.  See below.

First let me clear up some terminology.  “Deep Packet Inspection” was the term some Gartner analyst popularized to describe what content filtering gateways do. They inspect content for worms, attacks, and viruses. Somewhere along the line the traffic shaping industry (Ellacoya, Allot, Sandvine) co-opted the term to describe what their devices do: look at the packet header to determine what protocol is being transported and throttle the throughput based on protocol. In other words Quality of Service for network traffic. These devices do not look at payloads at all except in some rare instances when you have to determine if Skype-like programs are spoofing different protocols.

Firstly, Richard conveniently trivialized DPI.  DPI is certainly about inspecting the packet (beyond the header, by the way) and determining, with precision and fidelity, what protocol and application are being used.  In a carrier network, that’s used for provisioning, network allocation, bandwidth management and service level management.

These are terms every enterprise of worth is used to hearing and managing to!

Certainly one disposition, once the packets are profiled, could be to apply QoS, which is often what one might do in DoS/DDoS situations, but there are multiple benefits to being able to apply policies and enact dispositions that depend on the use or misuse of a specific application or protocol.

In fact, if you don’t think this is "security," why do we see QoS/rate limiting in almost every firewall platform today?  It may not show up in the GUI, but this is a fundamental way of dealing with attacks.
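To make that distinction concrete, here’s a rough sketch (entirely my own illustration; the signatures and policy table are invented, not anything from Ellacoya or Arbor): a header-only device decides on the port, while a payload-aware classifier decides on what the traffic actually is and then drives a disposition.

# Rough sketch of DPI-style classification driving a policy disposition.
# Signatures and policies are illustrative, not from any shipping product.

APP_SIGNATURES = {
    b"BitTorrent protocol": "bittorrent",
    b"SSH-2.0":             "ssh",
    b"GET / HTTP/1.1":      "http",
}

POLICY = {
    "bittorrent": ("shape", 128),     # throttle to 128 kb/s
    "ssh":        ("allow", None),
    "http":       ("allow", None),
    "unknown":    ("inspect", None),  # flag for closer inspection
}

def classify(payload: bytes) -> str:
    """Look past the header: match the application by payload signature."""
    for sig, app in APP_SIGNATURES.items():
        if sig in payload:
            return app
    return "unknown"

def disposition(dst_port: int, payload: bytes):
    # A header-only device would decide on dst_port alone; DPI decides on
    # what the traffic actually is, even if it rides an "innocent" port.
    app = classify(payload)
    action, limit = POLICY[app]
    return app, action, limit

# BitTorrent spoofing port 80 still gets classified and shaped:
print(disposition(80, b"\x13BitTorrent protocol..."))

Whether the resulting action is shaping or dropping, the classification step is the same, which is exactly why the line between "QoS" and "security" is so blurry.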

Oh, by the way Richard, perhaps you ought to read your own product manuals as Fortinet provides QoS as a "security" function…perhaps not as robustly as Ellacoya…and soon Arbor:

FortiGate Traffic Shaping Technical Note

The FortiGate Traffic Shaping Technical Note, available on the Technical Documentation Web Site, discusses Quality of Service (QoS) and traffic shaping, describes FortiGate traffic shaping using the token bucket filter mechanism, and provides general procedures and tips on how to configure traffic shaping on FortiGate firewalls.

FortiOS v3.0 MR1 introduced inbound traffic shaping per interface. For any FortiGate interface you can use the following command to configure inbound traffic shaping for that interface. Inbound traffic shaping limits the bandwidth accepted by the interface.


config system interface
   edit port2
      set inbandwidth 50
   end
end

This command limits the inbound traffic that the port2 interface accepts to 50 Kb/sec. You can set inbound traffic shaping for any FortiGate interface and for more than one FortiGate interface. Setting inbandwidth to 0 (the default) means unlimited bandwidth or no traffic shaping.

Inbound traffic shaping limits the amount of traffic accepted by the interface. This limiting occurs before the traffic is processed by the FortiGate unit. Limiting inbound traffic takes precedence over traffic shaping applied using firewall policies. This means that traffic shaping applied by firewall policies is applied to traffic already limited by inbound traffic shaping.

Lot of uses of the word "firewall" in the context of "traffic shaping" in that description…Here’s a link to your knowledge base, just in case you don’t have it 😉
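Since the technical note above describes FortiGate shaping in terms of a token bucket filter, here’s a minimal, generic sketch of that mechanism (the textbook version, not Fortinet’s actual implementation):

import time

class TokenBucket:
    """Textbook token bucket: tokens accrue at `rate` per second up to
    `capacity`; a packet conforms only if enough tokens are available."""

    def __init__(self, rate_kbps: float, burst_kb: float):
        self.rate = rate_kbps          # refill rate, kilobits/sec
        self.capacity = burst_kb       # bucket depth, kilobits
        self.tokens = burst_kb
        self.last = time.monotonic()

    def allow(self, packet_kb: float) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_kb:
            self.tokens -= packet_kb
            return True                # conforms: forward the packet
        return False                   # exceeds the shaped rate: queue or drop

# Loosely analogous to the "set inbandwidth 50" example: 50 Kb/sec sustained.
shaper = TokenBucket(rate_kbps=50, burst_kb=50)
print(shaper.allow(12))   # True while tokens remain

Nothing exotic, but it’s the same building block the FortiGate note above describes.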

Secondly, since availability is often a function of security, as an administrator I’d want to be able to craft a "security policy" that lets me make sure the stuff that matters most to me gets through and is prioritized as such, while the rest fights for scraps.  Doing this with precision and fidelity is incredibly important whether you’re a carrier or an enterprise.  Oh, wait, here’s some more Fortinet documentation that seems to contradict the "that’s not security" sentiment:

Traffic shaping, which is applied to a Firewall Policy, is enforced for traffic which may flow in either direction. Therefore a session which may be setup by an internal host to an external one, via an Internal->External policy, will have Traffic shaping applied even if the data stream is then coming from external to internal. For example, an FTP ‘get’ or an SMTP server connecting to an external one, in order to retrieve email.

Remember CHKP’s FloodGate?  Never particularly worked out from an integration perspective, but a good idea, nonetheless.  Cisco’s got it.  Juniper’s got it…

Further, there’s a new company — you may have heard about it — that does just this sort of thing, taking application specificity and applying granular policies to traffic: Palo Alto Networks.  They call it their "next generation firewall."  Love or hate the title (I don’t particularly care for it), I call it common sense.  Are you going to tell me this isn’t security, either?

The next, next-generation of security devices will extend these decision-making criteria from ports/protocols through application "conduits" and start making decisions on content in context.  This is the natural extension of DPI.

I won’t argue with the rest of Richard’s points about M&A risk and market expansion because he’s right in many of his examples, but that wasn’t the title of his post or the real sentiment.

I think that this deal enhances both the capabilities and applicability of Arbor’s solutions which have been largely stovepiped and pigeonholed in the DDoS category based upon what they do today.  I hope they can execute on the integration play.

As to the notion of ignoring the enterprise and "doubling down on the carrier market," Arbor has a great DDoS product for both markets; this allows them now to take advantage of the cresting consolidation activity in both and start diversifying their SECURITY offerings in a way that is intelligently roadmapped.

Who knows.  Perhaps they’ll re-market the combined products as a "multiservices security gateway" just like Fortinet does with their carrier products (here.)

I think your marketing slip is showing, Rich.

/Hoff

Categories: Application Security Tags:

Pushing Reset On the IT vs. SCADA Security Debate….

January 23rd, 2008 7 comments

I think that perhaps I have chosen a poor approach in trying to raise awareness for process control and SCADA (in)security.  You can find recent SCADA posts here, including the "awareness campaign" Mogull and I launched a couple of weeks back that got a ton of eyeballs …

I believe I reacted poorly to the premise that some of those who assert expertise in this area tend to dismiss anyone who has a background only in what they define as "IT Security" as being unable to approach understanding — let alone securing — this technology.

Let me take a step back for a moment.

I’d like to get to the bottom of something regarding the alleged great divide between what is being described as diametrically opposed aptitude and experience required to secure "IT" infrastructure versus process control systems such as SCADA. 

I notice a similar divergence and statements being made between those who specialize in web application security (WebAppSec) versus information or network security (InfoSec/NetSec.)

For example, WebAppSec is a discipline and specialty that some suggest requires a level of experience and expertise beyond that of traditional "information security" or "network security" practitioners.  It is suggested that in order to truly secure web applications, one generally requires programming experience and a very detailed understanding of complex data structures, databases, distributed application architecture, etc.

I think these statements are reasonable, but does it preclude an InfoSec/NetSec practitioner from contributing to effectively manage risk in a WebAppSec environment?

A network security practitioner can deploy a web application firewall and generally configure the solution, but the antagonists suggest that a level of protection commensurate with the complexity and dynamics of the code they are attempting to "secure" cannot be achieved without an in-depth understanding of the application, its workflow and its behavior.
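To put that in concrete terms, compare a network-layer check with an application-aware, positive-security check (a sketch of my own; the endpoint, parameters and patterns are hypothetical, not from any real WAF):

import re

# A NetSec-style rule needs only the basics: is this port allowed?
def netsec_allows(dst_port: int) -> bool:
    return dst_port in (80, 443)

# A WebAppSec-style positive-security rule needs to know the application:
# which parameters this (hypothetical) endpoint takes and what each one is
# allowed to contain.
TRANSFER_PROFILE = {
    "account_id": re.compile(r"^\d{8}$"),            # exactly 8 digits
    "amount":     re.compile(r"^\d{1,7}(\.\d{2})?$"),
    "memo":       re.compile(r"^[\w ,.'-]{0,128}$"),
}

def webapp_allows(params: dict) -> bool:
    if set(params) != set(TRANSFER_PROFILE):          # unexpected or missing fields
        return False
    return all(TRANSFER_PROFILE[k].match(v) for k, v in params.items())

# Both requests look identical at the network layer (TCP/443)...
print(netsec_allows(443))                             # True
# ...but only application knowledge catches the injected parameter:
print(webapp_allows({"account_id": "12345678",
                     "amount": "100.00",
                     "memo": "rent' OR '1'='1"}))     # False

Nothing in the first check requires knowing the application; everything in the second does, which is precisely the antagonists’ point.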

Again, in reflection, I’d say that’s not an unreasonable assessment.  However, WebAppSec and NetSec/InfoSec guys in mature organizations generally should know what they don’t know and work together to implement a holistic solution across layers.  It doesn’t always work out that way, but in order to secure the WebApps, we can’t ignore the underpinnings of the network or information security foundations, either. 

It really should be a discussion, then, on how to unify complementary approaches at various levels with an overall focus on managing risk.  However, what I find is a downright civil war on the "IT" vs "SCADA" security front.  I have to ask: why?

Here’s an excerpt from a post I found on Dale Peterson’s excellent Digital Bond blog.  It was a review of a SCADA security presentation at a CCC event in Italy regarding an introduction (of sorts) to SCADA security.  The premise isn’t really important, but I think that this does a good job of explaining some of the issues and sentiments that I am referring to:

Now here’s the good news: Asset owners, you don’t need to worry about hackers. When they talk about “owning critical infrastructure”, they’re just sharing their wildest dreams. In reality, they have nothing in their hands. Zero. Nada. Niente. It will take several more years until the hacker community has learned to master various flavours of PLCs with their different protocols and vulnerabilities. It will take further years until they get to things like OPC and furnish advanced attack methods against it. And by the time they come up with decent exploits for the various SCADA applications that we use today, most CxOs will already be retired. We have heard over and over again that the IT folks aren’t particularly good at securing SCADA environments. Guess what, they aren’t good at attacking them either. However our hackers do think nobody will notice because the stuff is all so complex. That’s what I call “insecurity by obscurity”.

This whole notion of "it’s so complex and so few people know anything about it so we have nothing to fear" seems to be the point of divide.

There’s another really telling post on Dale’s site (authored by him) titled "Firewalls are easy, control systems are hard," wherein the following inaccurate premise is painted, one which reduces the scope of the entire infosec/netsec profession down to a five-tuple in a basic packet filtering firewall:

One of the common refrains heard again at ISA Expo is that IT firewalls are too difficult to configure and deploy. Several presenters, especially those promoting field security appliances, mentioned this, and it seemed to be generally accepted. While I’m all for simplicity and credit the vendors for trying to ease deployment, firewalls are simple compared to the deploying PLC’s, defining points in the SCADA database, developing displays, control loops, and the myriad of other detailed configuration required to make a control system work.

A firewall ruleset is as simple as defining rules by source IP, destination IP and port. Since communication in control systems is limited as compared to the corporate network, the ruleset is usually very small.

How simple is that compared to monitoring and controlling a complex process distributed over a plant or large part of the country with 5000 points or 100,000 points? I was introduced to control systems in 2000 and have worked on a large number of SCADA and DCS in a variety of industry sectors and I still marvel at the effectiveness and attention to detail in these systems. There is nothing in firewall or any other IT security system configuration that comes close to the complexity in configuring and deploying control systems.

That may be true in an SME/SMB network, but the reality is that firewalls in a large enterprise (which is a much more reasonable comparison) are now just a small piece of the puzzle.  Endpoints numbering in the thousands (if not tens of thousands) which run hundreds of application combinations aren’t exactly chopped liver to secure.  Add in Web Application Firewalls, Database Monitoring, Encryption (at rest, in motion), IDS, IPS, Proxies, A/V, URL Filtering, Anti-spam, NBAD, SIEM, etc. and it just gets more complex from there.

If life were as simple as deploying a firewall and firing off a five-tuple ruleset, we wouldn’t be in this pickle.
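For the record, here’s roughly what that "simple" five-tuple evaluation amounts to (a generic first-match sketch of my own, with made-up addresses and a hypothetical rule, not any particular firewall’s rule engine):

from collections import namedtuple
from ipaddress import ip_address, ip_network

# Source port omitted for brevity; the idea is the same.
Rule = namedtuple("Rule", "src dst proto dport action")

RULESET = [
    # Hypothetical: an HMI subnet may poll the historian over TCP/1433.
    Rule(ip_network("10.1.10.0/24"), ip_network("10.1.20.5/32"),
         "tcp", 1433, "permit"),
    # Everything else is dropped.
    Rule(ip_network("0.0.0.0/0"), ip_network("0.0.0.0/0"),
         "any", None, "deny"),
]

def evaluate(src, dst, proto, dport):
    """First-match evaluation on source, destination, protocol and port."""
    for r in RULESET:
        if (ip_address(src) in r.src and ip_address(dst) in r.dst
                and r.proto in (proto, "any")
                and r.dport in (dport, None)):
            return r.action
    return "deny"

print(evaluate("10.1.10.7", "10.1.20.5", "tcp", 1433))   # permit
print(evaluate("10.1.30.9", "10.1.20.5", "tcp", 1433))   # deny

The point isn’t that this is hard to write; it’s that in a large enterprise this is one small layer among the dozens of controls listed above.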

Whether a NetSec/InfoSec practitioner knows the in-depth details of implementing PLCs/RTUs or the inner workings of the IEC 61131-3 block programming language is neither here nor there, because it’s only one piece of the puzzle.

Many InfoSec/NetSec practitioners don’t have expertise in SQL/pSQL, but they work with the DBAs to secure databases, right?

Once these systems are interconnected to an IP-enabled network, it requires cooperation up and down the stack.  InfoSec/NetSec pro’s need to become SCADA-aware and SCADA pro’s need to stop suggesting that this technology is just so complex and overwhelming that it’s beyond our ability to effectively collaborate and that "firewall jockeys" just can’t understand.

The reality is that the bad guys look for the weakest link in the chain.  Will they attack complex protocol stacks and programming languages first?  No, they’ll go after the low-hanging fruit like poorly-configured/secured end-nodes, bad perimeter controls and general user-driven crap like we see in the rest of the world.  They won’t need to even spell PLC.

We need the same level of information sharing and cross-pollination of our respective skill sets in this regard, instead of squaring off like it’s a battle of us versus them.

/Hoff

Categories: Uncategorized Tags:

Client Virtualization and NAC: The Fratto Strikes Back…

January 20th, 2008 5 comments

Attention NAC vendors who continue to barrage me via email/blog postings claiming I don’t understand NAC: You’re missing the point of this post, which basically confirms my point; you’re not paying attention and are being myopic.

I included NAC with IPS in (the original) post here to illustrate two things:

(1) Current NAC solutions aren’t particularly relevant when you have centralized and virtualized client infrastructure and

(2) If you understand the issues with offline VM’s in the server world and what it means to compliance and admission control on spin-up or when VMotioned, you could add a lot of value by adapting your products (if you’re software based) to do offline VM conformance/remediation and help prevent VM sprawl and inadvertent non-compliant VM spin-up…

But you go ahead and continue with your strategy…you’re doing swell so far convincing the market of your relevance.

Now back to our regular programming…

— ORIGINAL POST —


I sense a disturbance in the force…

Mike Fratto’s blog over at the NWC NAC Immersion Center doesn’t provide a method of commenting, so I thought I’d respond to his post here regarding my latest rant on how virtualization will ultimately and profoundly impact the IPS and NAC appliance markets titled "How the hypervisor is death by a thousand cuts to the network IPS/NAC appliance vendors."

I think Mike took a bit of a left turn when analyzing my comments because he missed my point.  Assuming it’s me who’s wrong, I’ll respond as best I can.

A couple of things really stood out in Mike’s comments and I’m going to address them in reverse order.  I think most of Mike’s comments strike an odd chord with me because my post was about what is going to happen to the IPS/NAC markets given virtualization’s impact, not necessarily about what these products look like today.

Even though the focus of my post was not client virtualization, let’s take this one first:

Maybe I am missing something, but client virtualization just doesn’t seem to be in the cards today. Even if I am wrong, and I very well could be, I don’t think mixing client VM’s with server VM in the same hypervisor would be a good idea if for no other reason than the fact that a client VM could take down the hypervisor or suck up resources.

I don’t say this to be disrespectful, but it doesn’t appear that Mike understands how virtualization technology works.  I can’t understand what he means when he speaks of "…mixing client VM’s with server VM in the same hypervisor."  VM’s sit atop the hypervisor, not *in* it.  Perhaps he’s suggesting that, despite isolation and the entire operating premise of virtualization, it’s a bad idea to have a virtualized client instance colocated as a VM on the same physical host next to a VM running a server instance?  Why?

Further, beyond theoretical hand-wringing, I’d very much like to see a demo today of how a "…client VM could take down the hypervisor."

I won’t argue that client virtualization is not yet as popular as server virtualization, but according to folks like Gartner it’s on the uptake, especially as it relates to endpoint management and the consumerization of IT.  With entire product lines from folks like Citrix (Desktop Server, Presentation Server, XenDesktop) and VMware (VDI), it’s sort of a hard bite to swallow.

This is exactly the topic of my post here (Thin Clients: Does this laptop make my assets look fat?), underscored with a quick example by Thomas Bittman from Gartner:

Virtualization on the PC has even more potential than server virtualization to improve the management of IT infrastructure, according to Mr Bittman.

“Virtualization on the client is perhaps two years behind, but it is going to be much bigger. On the PC, it is about isolation and creating a managed environment that the user can’t touch. This will help change the paradigm of desktop computer management in organizations. It will make the trend towards employee-owned notebooks more manageable, flexible and secure.”

Today, I totally get that NAC is about edge deployment (access layer,) keeping the inadvertent client polluter from bringing something nasty onto the network, making sure endpoints are compliant to network policy, and in some cases, controlling access to network resources:

NAC is, by definition, targeting hosts at the edge. The idea is to keep control access of untrusted or untrustworthy hosts to the network based on some number of conditions like authentication, host configuration, software, patch level, activity, etc. NAC is client facing regardless of whether you’re controlling access at the client edge or the data center edge.

I understand that today’s version of NAC isn’t about servers, but the distinction between clients and servers blurs heavily due to virtualization and NAC — much like IPS — is going to have to change to address this.  In fact, some might argue it already has.  Further, some of the functionality being discussed when using the TPM is very much NAC-like.  Remember, given the dynamic nature of VMs (and technology like VMotion) the reality is that a VM could turn up anywhere on a network.  In fact, I can run (I do today, actually) a Windows "server" in a VM on my laptop:

You could deploy NAC to access by servers to the network, but I don’t think that is a particularly useful or effective strategy mainly because I would hope that your servers are better maintained and better managed than desktops. Certainly, you aren’t going to have arbitrary users accessing the server desktop and installing software, launching applications, etc. The main threat to server is if they come under the control of an attacker so you really need to make sure your apps and app servers are hardened.

Within a virtualized environment (client and server) you won’t need a bunch of physical appliances or "NAC switches," as this functionality will be provided by a virtual appliance within a host or as a function of the trusted security subsystem embedded within the virtualization provider’s platform.

I think it’s a natural by-product of the productization of what we see as NAC platforms today, anyhow.  Most of the NAC solutions today used to be IPS products yesterday.  That’s why I grouped them together in this example.

This next paragraph almost makes my point entirely:

Client virtualization is better served with products like Citrix MetaFrame or Microsoft’s Terminal Services where the desktop configuration is dictated and controlled by IT and thus doesn’t suffer from the same problems that physical desktops do. Namely, in a centrally managed remote client situation, the administrator can more easily and effectively control the actions of a user and their interactions on the remote desktop. Drivers that are being pushed by NAC vendors and analysts, as well as responses to our own reader polls, relating the host condition like patch level, running applications, configuration, etc are more easily managed and should lead to a more controlled environment.

Exactly!  His choice of products aside, if the client environment is centralized and virtualized, why would I need NAC (as it exists today) in this environment!?  I wouldn’t.  That was the point of the post!

Perhaps I did a crappy job of explaining my point, or maybe if I hadn’t included NAC alongside IPS, Mike wouldn’t have made that left turn, but I maintain that IPS and NAC both face major changes in having to deal with the impact virtualization will bring.

/Hoff

 

A Shout Out to My Boy Grant Bourzikas…It’s How We Roll…

January 19th, 2008 2 comments

I was reading Jeremiah Grossman’s review of Fortify’s film "The New Face of Cybercrime" (watch the trailer here) and noted this little passage in his review:

Then in a bold move, Roger Thorton (CTO of Fortify) and director Fredric Golding (with the 3 other panelists), opened things up to the audience to comment and ask questions. Right when they did that I was thinking to myself, OMG, these guys are crazy asking an infosec what they thought! To their credit they were very patient and professional in dealing with the many inane “constructive” criticisms voiced.

The stand out of the panelists was Grant Bourzikas, CISO of Scottrade, who was able to answer pointed question masterfully from “business” interest perspective. Clearly he has been around the block once or twice when it comes to web application security in the real world.

I was thrilled that Jeremiah pointed Grant out.  See, G. was one of my biggest enterprise customers at Crossbeam and I can tell you that he and the rest of the Scottrade security team know their stuff.  They have an incredible service architecture with one of the most robust security strategies you’ve seen in a business that lives and dies by the uptime SLAs they keep; availability is a function of security and Grant and his team do a phenomenal job maintaining both.

I can personally attest to the fact that he’s been around the block more than a couple of times 😉  It’s very, very cool to see someone like Jeremiah recognize someone like Grant — since I know both of them it’s a double-whammy for me because of how much respect I have for each of them.

Wow.  This got a little mushy, huh?  I guess I just miss him and his bobble-head doll (inside joke, sorry Evan.)

My only question is how did Grant manage to escape St. Louis?

/Hoff

CIA: Hackers to Blame for Power Outages (’nuff said)

January 18th, 2008 1 comment

I’m sorry, did someone say we have nothing to worry about when it comes to SCADA and control systems security?  I must have missed the memo:

CIA: Hackers to Blame for Power Outages

WASHINGTON (AP) — Hackers literally turned out the lights in multiple cities after breaking into electrical utilities and demanding extortion payments before disrupting the power, a senior CIA analyst told utility engineers at a trade conference.

All the break-ins occurred outside the United States, said senior CIA analyst Tom Donahue. The U.S. government believes some of the hackers had inside knowledge to cause the outages. Donahue did not specify what countries were affected, when the outages occurred or how long the outages lasted. He said they happened in "several regions outside the United States."

"In at least one case, the disruption caused a power outage affecting multiple cities," Donahue said in a statement. "We do not know who executed these attacks or why, but all involved intrusions through the Internet."

A CIA spokesman Friday declined to provide additional details.

"The information that could be shared in a public setting was shared," said spokesman George Little. "These comments were simply designed to highlight to the audience the challenges posed by potential cyber intrusions."

Donahue spoke earlier this week at the Process Control Security Summit in New Orleans, a gathering of engineers and security managers for energy and water utilities.

The Bush administration is increasingly worried about the little-understood risks from hackers to the specialized electronic equipment that operates power, water and chemical plants.

In a test last year, the Homeland Security Department produced a video showing commands quietly triggered by simulated hackers having such a violent reaction that an enormous generator shudders as it flies apart and belches black-and-white smoke.

The recorded demonstration, called the "Aurora Generator Test," was conducted in March by government researchers investigating a dangerous vulnerability in computers at U.S. utility companies known as supervisory control and data acquisition systems. The programming flaw was fixed, and equipment makers urged utilities to take protective measures.

Now, this article says these attacks were outside the U.S. (since it came from the CIA, you can imagine why.)  Also, it does NOT directly say that SCADA systems were attacked.  However, these statements were made at a SCADA "Process Control" Security conference, so I’m going to take the liberty of bridging that assumption.  Either way, it highlights the problem at hand (see the 787 Dreamliner story and the Polish Tram derailment…)

Do you really think it’s that much of a reach to suggest it’s not happening on our shores?

If anyone gives me any more crap about being concerned regarding the possibility/potential for disruption…look at the boldfaced section.  The compromise was conducted over the Internet.  Don’t forget, this sort of thing is supposed to be impossible given some comments from my "awareness campaign":

Oh gosh, where do I begin Chris? 

What do the first letters of SCADA stand for?  Supervisory Control. 

A real SCADA system doesn’t issue direct controls. It issues Supervisory Controls. There should be no time critical control loops in SCADA. In other words, we have vulnerabilities. But they won’t destroy anything right away. We engineers know better than to trust complex software.

Most good design practice is based upon graceful degradation. In other words, we don’t send a command to open a valve. We send commands to change the pressure differential setpoint. A local controller takes care of the rest. There are sanity checks in the local controller.

You could send commands to the field that would screw things up. But most people would notice and we’d take action. Keep in mind, that while our operation is very careful and deliberate, the distribution system was built for some wild extremes including pipe breaks, extreme weather, communication outages, and vandalism. A successful attack would require intimate knowledge of where the real vulnerabilities are.

Are you an expert at water utilities too? 

No, Jake.  I’m not a water utilities expert, just a concerned observer & citizen. 

Hat tip to Stiennon for the source.

/Hoff

Categories: Uncategorized Tags:

UPDATED: How the Hypervisor is Death By a Thousand Cuts to the Network IPS/NAC Appliance Vendors

January 18th, 2008 4 comments

Attention NAC vendors who continue to barrage me via email/blog postings claiming I don’t understand NAC:  You’re missing the point of this post, which basically confirms my point; you’re not paying attention and are being myopic.

I included NAC with IPS in this post to illustrate two things:

(1) Current NAC solutions aren’t particularly relevant when you have centralized and virtualized client infrastructure and

(2) If you understand the issues with offline VM’s in the server world and what it means to compliance and admission control on spin-up or when VMotioned, you could add a lot of value by adapting your products (if you’re software based) to do offline VM conformance/remediation and help prevent VM sprawl and inadvertent non-compliant VM spin-up…

But you go ahead and continue with your strategy…you’re doing swell so far convincing the market of your relevance.

Now back to our regular programming…

— ORIGINAL POST —


From the "Out Of the Loop" Department…

Virtualization is causing IPS and NAC appliance vendors some real pain in the strategic planning department.  I’ve spoken to several product managers of IPS and NAC companies that are having to make some really tough bets regarding just what to do about the impact virtualization is having on their business.

They hmm and haw initially about how it’s not really an issue, but 2 beers later, we’re speaking the same language…

Trying to align architecture, technology and roadmaps to the emerging tidal wave of consolidation that virtualization brings can be really hard.  It’s hard to differentiate where the host starts and the network ends…

In reality, firewall vendors are in exactly the same spot.  Check out this Network World article titled "Options seen lacking in firewall virtual server protection."  In today’s world, it’s almost impossible to distinguish a "firewall" from an "IPS" from a "NAC" device to a new-fangled "highly adaptive access control" solution (thanks, Vernier Autonomic Networks…)

It’s especially hard for vendors whose IPS/NAC software is tied to specialty hardware, unless of course all you care about is enforcing at the "edge" — wherever that is, and that’s the point.  The demarcation of those security domain diameters has now shrunk.  Significantly, and not just for servers, either.  With the resurgence of thin clients and new VDI initiatives, where exactly is the client/server boundary?

Prior to virtualization, network-based IPS/NAC vendors would pick arterial network junctions and either use a tap/SPAN port in an out-of-band deployment or slap a box inline between the "trusted" and "untrusted" sides of the links and that was that.  You’d be able to protect assets based on port, VLAN or IP address.

You obviously only see what traverses those pipes.  If you look at the problem I described here back in August of last year, where much of the communication takes place as intra-VM sessions on the same physical host that never actually touch the externally "physical" network, you’ve lost precious visibility for detection let alone prevention.
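A trivially simplified model of that blind spot (hostnames and topology invented purely for illustration):

# Toy illustration of the visibility problem: an IPS tapping the physical
# uplink only sees frames that actually leave the host. Two VMs on the same
# vSwitch talk to each other without ever touching that uplink.

class PhysicalTapIPS:
    def __init__(self):
        self.seen = []
    def inspect(self, frame):
        self.seen.append(frame)

class VSwitch:
    def __init__(self, uplink_ips):
        self.local_vms = set()
        self.uplink_ips = uplink_ips
    def connect(self, vm):
        self.local_vms.add(vm)
    def send(self, src, dst, payload):
        frame = (src, dst, payload)
        if dst in self.local_vms:
            return "delivered locally"         # never leaves the host
        self.uplink_ips.inspect(frame)         # only now does the IPS see it
        return "sent via physical uplink"

ips = PhysicalTapIPS()
vswitch = VSwitch(uplink_ips=ips)
vswitch.connect("web-vm")
vswitch.connect("db-vm")

vswitch.send("web-vm", "db-vm", "SELECT * FROM accounts")   # intra-host
vswitch.send("web-vm", "corp-mail", "status report")        # leaves the host
print(len(ips.seen))   # 1 -- the intra-VM session was never visible

The tap sees exactly one of the two sessions; the intra-host one never leaves the box.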

I think by now everyone recognizes how server virtualization impacts network and security architecture and basically provides four methods (potentially in combination) today for deploying security solutions:

  1. Keep all your host based protections intact and continue to circle the wagons around the now virtualized endpoint by installing software in the actual VMs
  2. When available, take a security solution provider’s virtual appliance version of their product (if they have one), install it on a host as a VM and configure the virtual networking within the vSwitch to provide the appropriate connectivity.
  3. Continue to deploy physical appliances between the hosts and the network
  4. Utilize a combination of host-based software and physical IPS/NAC hardware to provide off-load "switched" or "cut-through" policy enforcement between the two.

Each of these options has its pros and cons for both the vendor and the customer; trade-offs in manageability, cost, performance, coverage, scalability and resilience can be ugly.  Those that have both endpoint and network-based solutions are in a far more flexible place than those that do not.

Many vendors who have only physical appliance offerings are basically stuck adding 10Gb/s Ethernet connections to their boxes as they wait impatiently for options 5, 6 and 7 so they can "plug back in":

5.  Virtualization vendors will natively embed more security functionality within the hypervisor and continue integrating with trusted platform models

6.  Virtualization vendors will allow third parties to substitute their own vSwitches as a function of the hypervisor

7. Virtualization vendors will allow security vendors to utilize a "plug-in" methodology and interact directly with the VMM via API

These options would allow both endpoint software installed in the virtual machines and external devices to interact directly with the hypervisor, with full purview of inter- and intra-VM flows, rather than merely existing as a "bolted-on" function that lacks visibility and best-of-breed functionality.

While we’re on the topic of adding 10Gb/s connectivity, it’s important to note that having 10Gb/s appliances isn’t always about how many Gb/s of IPS traffic you can handle; it’s also about consolidating what would otherwise be potentially dozens of trunked LACP 1Gb/s Ethernet and FC connections pouring out of each host, in order to manage both the aggregate bandwidth and the issues driven by a segmented network.

So to get the coverage across a segmented network today, vendors are shipping their appliances with tons of ports — not necessarily because they want to replace access switches, but rather to enable coverage and penetration.

On the other hand, most of the pure-play software vendors today who say they are "virtualization enabled" really mean that their product installs as a virtual appliance on a VM on a host.  The exposure these solutions have to traffic is entirely dependent upon how the vSwitches are configured.

…and it’s going to get even more hairy as the battle for the architecture of the DatacenterOS also rages.  The uptake of 10Gb/s Ethernet is also contributing to the mix as we see customers:

  • Upgrading from servers to blades
  • Moving from hosts and switches to clusters and fabrics
  • Evolving from hardware/software affinity to grid/utility computing
  • Transitioning from infrastructure to service layers in “the cloud”

Have you asked your hardware-bound IPS and NAC vendors how they plan to deal with this tsunami on their roadmaps within the next 12 months?  If not, grab a lifejacket.

/Hoff

UPDATE:  It appears nobody uses trackbacks anymore, so I’m resorting to activity logs, Google alerts and stubbornness to tell when someone’s referencing my posts.  Here are some interesting references to this post:

…also, this is right on the money:

I think I’ll respond to them on my blog with a comment on theirs pointing back over…

Come to Boston’s Own (New) Security Conference in March 2008 – Source Boston

January 16th, 2008 1 comment

Besides the monthly BeanSec! gatherings, New England really needs a security conference to call its own.  Now we have one.

You can find a ton of detail about the show here, but if you’re impatient because you’re pahking the cah in the yahd and can’t get to your browsa, here’s the skinny:

The security convention is called SOURCE: Boston 2008, and it’s held from March 12-14th, the W-F before St. Patrick’s Day weekend. The place it’s being held is the Hyatt Regency Cambridge, right on MIT’s campus.

It’s the big step-shaped hotel with the neon framing right on the water. We have negotiated low room rates and are sporting quite a line-up of speakers and keynotes, including keynotes from Dan Geer of MIT Athena/Kerberos fame, Richard Clarke, and Steven Levy.

We will also have a panel with the members of the L0pht – speaking together for the first time in 10 years. We have some great evening activities such as a VIP reception and a Thirsty Thursday Pub Crawl.

The three tracks are application security, business and security, and new security technologies. It’s a professional conference and we’re having several CEOs speak, as well as other chief officers. However, it’s combining that professionalism and business component with the edginess and fun of some of the hacker conferences.

Rich Mogull and I are appearing on stage together as Click and Clack (or is it Wallace & Gromit?)  That ought to be worth the price of admission right there.

See you there.

/Hoff

Categories: Uncategorized Tags: