
Archive for the ‘Governance, Risk Management & Compliance (GRC)’ Category

Intel TPM: The Root Of Trust…Is Made In China

February 22nd, 2013 8 comments

This is deliciously ironic.

Intel's implementation of the TCG-driven TPM — the Trusted Platform Module — often described as a hardware root of trust, is essentially a cryptographic processor that allows for the storage, retrieval and attestation of keys.  There are all sorts of uses for this technology, including things I've written of and spoken about many times prior.  Here are a couple of good links:

But here’s something that ought to make you chuckle, especially in light of current news and a renewed focus on supply chain management relative to security and trust.

The Intel TPM implementation that is used by many PC manufacturers, the same one that plays a large role in Intel’s TXT and Mt. Wilson Attestation platform, is apparently…wait for it…manufactured in…wait for it…China.

<thud>

I wonder how NIST feels about that?  ASSurance.

ROFLCoptr.  Hey, at least it’s lead-free. o_O

Talk amongst yourselves.

/Hoff

 

 


Bridging the Gap Between Devs & Security – A Collaborative Suggestion…

May 23rd, 2012 3 comments

After my keynote at Gluecon (Shit My Cloud Evangelist Says…Just Not To My CSO,) I was asked by an attendee what things he could do within his organization to repair the damage and/or mistrust between developers and security organizations in enterprises.

Here’s what I suggested based on past experience:

  1. Reach out and have a bunch of “brown bag lunches” wherein you host-swap each week; devs and security folks present to one another with relevant, interesting or new solutions in their respective areas
  2. Pick a project that takes a yet-to-be-solved interesting business challenge that isn’t necessarily on the high priority project list and bring the dev and security teams together as if it were an actual engagement.

Option 1 starts the flow of information.  Option 2 treats the project as if it were high priority, but allows security and dev to work together to talk about platform choices, management, security, etc., and because it's not mission critical, mistakes can be made and learned from…together.

For example, pick something like building a new app service that uses node.js and MongoDB and figure out how to build, deploy and secure it…as if you were going to deploy to public cloud from day one (and maybe you will.)
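
To make that concrete, here's a minimal, illustrative sketch of the kind of skeleton such a project might start from. Express and Helmet are my own arbitrary additions (the post only calls out node.js and MongoDB), and the endpoint and collection names are made up; the point is simply that input validation, bounded payloads and sane HTTP headers get discussed on day one rather than bolted on later.

```typescript
// Illustrative "brown bag" project skeleton: a small Node.js service backed by
// MongoDB with a few security basics from the start. Stack and names are
// assumptions for the sake of the example, not a prescribed architecture.
import express from "express";
import helmet from "helmet";
import { MongoClient, ObjectId } from "mongodb";

const app = express();
app.use(helmet());                          // sane default security headers
app.use(express.json({ limit: "10kb" }));   // bound the request body size

const client = new MongoClient(process.env.MONGO_URL ?? "mongodb://localhost:27017");
const notes = client.db("brownbag").collection("notes");

// Create a note; only a whitelisted, size-limited field is ever persisted.
app.post("/notes", async (req, res) => {
  const text = typeof req.body?.text === "string" ? req.body.text.slice(0, 1000) : null;
  if (!text) return res.status(400).json({ error: "text (string) is required" });
  const result = await notes.insertOne({ text, createdAt: new Date() });
  res.status(201).json({ id: result.insertedId });
});

// Fetch a note; the id is validated before it ever reaches the query.
app.get("/notes/:id", async (req, res) => {
  if (!ObjectId.isValid(req.params.id)) return res.status(400).json({ error: "bad id" });
  const doc = await notes.findOne({ _id: new ObjectId(req.params.id) });
  return doc ? res.json(doc) : res.status(404).json({ error: "not found" });
});

client.connect().then(() => app.listen(3000, () => console.log("listening on :3000")));
```

The stack is beside the point; what matters is that dev and security get to argue about what "secure enough" means on a project nobody's career depends on.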

You'll be amazed to see the trust it builds, especially when developers enroll security in their problem and let them participate from the start rather than treating them as the speed bump later.

10 minutes later it’ll be a DevOps love-fest. 😉

/Hoff

 


From the Concrete To The Hypervisor: Compliance and IaaS/PaaS Cloud – A Shared Responsibility

December 6th, 2010 No comments

* Update:  A few hours after writing this last night, AWS announced they had achieved Level 1 PCI DSS Compliance.* If you pay attention to how the announcement is worded, you’ll find a reasonable treatment of what PCI compliance means to an IaaS cloud provider – it’s actually the first time I’ve seen this honestly described:

Merchants and other service providers can now run their applications on AWS PCI-compliant technology infrastructure to store, process and transmit credit card information in the cloud. Customers can use AWS cloud infrastructure, which has been validated at the highest level (Level 1) of PCI compliance, to build their cardholder environment and achieve PCI certification for their applications.

Note how they phrased this, then read my original post below.

However, pay no attention to the fact that they chose to make this announcement on Pearl Harbor Day 😉

Here’s the thing…

A cloud provider can achieve compliance (such as PCI — yes v2.0 even) such that the in-scope elements of that provider which are audited and assessed can ultimately contribute to the compliance of a customer operating atop that environment.  We’ve seen a number of providers assert compliance across many fronts, but they marketed their way into a yellow card by over-reaching…

It should be clear already, but for a service to be considered compliant, the customer's in-scope elements running atop a cloud provider must also undergo assessment and achieve compliance.

That means compliance is elementally additive, the same way "security" is when someone else has direct operational control over elements of the stack that you don't.

In the case of an IaaS cloud provider who may achieve compliance from the "concrete to the hypervisor" (let's use PCI again), the customer in turn must have the contents of the virtual machine (OS, applications, operations, controls, etc.) independently assessed and meet PCI compliance in order that the entire stack of in-scope elements can be described as compliant.

Thus security — and more specifically compliance — in IaaS (and PaaS) is a shared responsibility.
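
To make the "additive" point concrete, here's an illustrative (and deliberately simplified) split of who attests to what in an IaaS stack. The layer names and assignments are my own shorthand, not a PCI-blessed scoping matrix; real scoping gets negotiated with a QSA.

```typescript
// Deliberately simplified: the provider's compliance and the customer's
// compliance are additive, and the customer's share doesn't go away.
type Party = "provider" | "customer";

const iaasPciScope: Record<string, Party> = {
  "physical facility (the concrete)": "provider",
  "network and hypervisor": "provider",
  "guest OS hardening and patching": "customer",
  "application code and key management": "customer",
  "cardholder data handling and operations": "customer",
};

const stillOnYou = Object.entries(iaasPciScope)
  .filter(([, party]) => party === "customer")
  .map(([layer]) => layer);

console.log("Provider attestation does not cover:", stillOnYou.join("; "));
```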

I've spent many a blog post battling marketing dragons from cloud providers that assert or imply that, simply by using said provider's network (which has undergone and passed one or more audits against a compliance framework), any of its customers magically inherit certification by default. I trust this is recognized as completely false.

As compliance frameworks catch up to the unique use-cases that multi-tenancy and technologies such as virtualization bring, we'll see more "compliant cloud" offerings spring up, easing customer pain related to the underlying moving parts.  This is, for example, what FedRAMP is aiming to provide with "pre-approved" cloud offerings.  We've got visibility and transparency issues to solve, as well as temporal issues such as the frequency and period of compliance audits, but there's progress.

We’re going to see more and more of this as infrastructure- and platform-as-a-service vendors look to mutually accelerate compliance to achieve that which software-as-a-service can more organically deliver as a function of stack control.

/Hoff

* Note: It's still a little unclear to me how some of the PCI requirements are met in an environment like an IaaS Cloud provider, where the "applications" we typically think of as trafficking in PCI in-scope data don't exist (but the infrastructure does), but I would assume that AWS leverages other certifications such as SAS and ISO cumulatively to petition the QSA for consideration during certification.  I'll ask this question of AWS and see what I get back.


Incomplete Thought: Compliance – The Autotune Of The Security Industry

November 20th, 2010 3 comments
[Image: Rapper T-Pain, Getty Images via @daylife]

I don’t know if you’ve noticed, but lately the ability to carry a tune while singing is optional.

Thanks to Cher and T-Pain, the rampant use of the Autotune in the music industry has enabled pretty much anyone to record a song and make it sound like they can sing (from the Autotune of encyclopedias, Wikipedia):

Auto-Tune uses a phase vocoder to correct pitch in vocal and instrumental performances. It is used to disguise off-key inaccuracies and mistakes, and has allowed singers to perform perfectly tuned vocal tracks without the need of singing in tune. While its main purpose is to slightly bend sung pitches to the nearest true semitone (to the exact pitch of the nearest tone in traditional equal temperament), Auto-Tune can be used as an effect to distort the human voice when pitch is raised/lowered significantly.[3]

A similar "innovation" has happened to the security industry.  Instead of having to actually craft and execute a well-tuned security program which focuses on managing risk in harmony with the business, we've simply learned to hum a little, add a couple of splashy effects and let the compliance Autotune do its thing.

It doesn’t matter that we’re off-key.  It doesn’t matter that we’re not in tune.  It doesn’t matter that we hide mistakes.

All that matters is that auditors can sing along, repeating the chorus and ensure that we hit the Top 40.

/Hoff


FedRAMP. My First Impression? We’re Gonna Need A Bigger Boat…

November 3rd, 2010 3 comments

I’m grumpy, confused and scared.  Classic signs of shock.  I can only describe what I’m feeling by virtue of an analog…

There's a scene in the movie Jaws where Chief Brody, chumming with fish guts to attract and kill the giant shark from the back of the boat called "The Orca," meets said fish for the first time.  Terrified by its menacing size, he informs [Captain] Quint "You're gonna need a bigger boat."

I felt like that today as I read through the recently released draft of the long-anticipated FedRAMP documents.  I saw the menace briefly surface, grin at me, and silently slip back into the deep.  Sadly, channeling Brody, I whispered to myself “…we’re gonna need something much sturdier to land this fish we call cloud.”

I’m not going to make any friends with this blog.

I can barely get my arms around all of the issues I have.  There will be sequels, just like with Jaws, though unlike Roy Scheider, I will continue to be as handsome as ever.

Here’s what I do know…it’s 81 pages long and despite my unhappiness with the content and organization, per Vivek Kundra’s introduction, I can say that it will certainly “encourage robust debate on the best path forward.”  Be careful what you ask for, you might just get it…

What I expected isn’t what was delivered in this document. Perhaps in the back of my mind it’s exactly what I expected, it’s just not what I wanted.

This is clearly a workstream product crafted by committee and watered down in the process.  Unlike the shark in Jaws, it's missing its teeth, but it's just as frightening because its heft is scary enough.  Even though all I can see is the dorsal fin cresting the water's surface, it's enough to make me run for the shore.

As I read through the draft, I was struck by a wave of overwhelming disappointment.  This reads like nothing more than a document which scrapes together other existing legacy risk assessment, vulnerability management, monitoring and reporting frameworks and loosely defines interactions between various parties to arrive at a certification that I find hard to believe isn't simply a way for audit companies to make more money and for service providers to get rubber-stamped service ATOs without much in the way of improved security or compliance.

This isn’t bettering security, compliance, governance or being innovative.  It’s not solving problems at a mass scale through automation or using new and better-suited mousetraps to do it.  It’s gluing stuff we already have together in an attempt to make people feel better about a hugely disruptive technical, cultural, economic and organizational shift.  This isn’t Gov2.0 at all.  It’s Gov1.0 with a patch.  It’s certainly not Cloud.

Besides the Center for Internet Security reference, there’s no mention of frameworks, tools, or organizations outside of government at all…that explains the myopic focus of “what we have” versus “what we need.”

The document is organized into three chapters:

Chapter 1: Cloud Computing Security Requirement Baseline
This chapter presents a list of baseline security controls for Low and Moderate impact Cloud systems. NIST Special Publication 800-53R3 provided the foundation for the development of these security controls.

Chapter 2: Continuous Monitoring
This chapter describes the process under which authorized cloud computing systems will be monitored. This section defines continuous monitoring deliverables, reporting frequency and responsibility for cloud service provider compliance with FISMA.

Chapter 3: Potential Assessment & Authorization Approach
This chapter describes the proposed operational approach for A&A's for cloud computing systems. This reflects upon all aspects of an authorization (including sponsorship, leveraging, maintenance and continuous monitoring), a joint authorization process, and roles and responsibilities for Federal agencies and Cloud Service Providers in accordance with the Risk Management Framework detailed in NIST Special Publication 800-37R1.

It’s clear that the document was written almost exclusively from the perspective of farming out services to Public cloud providers capable of meeting FIPS 199 Low/Moderate requirements.  It appears to be written in the beginning from the perspective of SaaS services and the scoping and definition of cloud isn’t framed — so it’s really difficult to understand what sort of ‘cloud’ services are in scope.  NIST’s own cloud models aren’t presented.  Beyond Public SaaS services, it’s hard to understand whether Private, Hybrid, and Community clouds — PaaS or IaaS — were considered.

It’s like reading an article in Wired about the Administration’s love affair with Google while the realities of security and compliance are cloudwashed over.

I found the additional requirements and guidance related to the NIST 800-53-aligned control objectives to be hit or miss, and some of them utterly laughable (such as SC-7 – Boundary Protection: "Requirement: The service provider and service consumer ensure that federal information (other than unrestricted information) being transmitted from federal government entities to external entities using information systems providing cloud services is inspected by TIC processes.")  Good luck with that.  Sections on backup are equally funny.

The "Continuous Monitoring" section, wherein the deliverable frequency and responsible party are laid out, engenders a response from "The Princess Bride:"

You keep using that word (continuous)…I do not think it means what you think it means…

Only 2 of the 14 categories are those which FedRAMP is required to provide (pentesting and IV&V of controls.)  All others are the responsibility of the provider.

Sigh.

There's also no clear explanation of where, in a service deployed on IaaS (as an example), anything inside the workload's VM fits into this scheme (you know…all the really important stuff like information and applications), nor of how agency processes intersect with the CSP, FedRAMP and the JAB.

The very dynamism and agility of cloud are swept under the rug, especially in sections discussing change control.  It’s almost laughable…code changes in some “cloud” SaaS vendors every few hours.  The rigid and obtuse classification of the severity of changes is absolutely ludicrous.

I’m unclear if the folks responsible for some of this document have ever used cloud based services, frankly.

“Is there anything good in the document,” you might ask?  Yes, yes there is. Firstly, it exists and frames the topic for discussion.  We’ll go from there.

However, I'm at a loss as to how to deliver useful and meaningful commentary back to this team using the methodology they've constructed…there's just so much wrong here.

I’ll do my best to hook up with folks at the NIST Cloud Workshop tomorrow and try, however if I smell anything remotely like seafood, I’m outa there.

/Hoff


Hey, Uh, Someone Just Powered Off Our Firewall Virtual Appliance…

June 11th, 2009 11 comments

I’ve covered this before in more complex terms, but I thought I’d reintroduce the topic due to a very relevant discussion I just had recently (*cough cough*)

So here’s an interesting scenario in virtualized and/or Cloud environments that make use of virtual appliances to provide security capabilities*:

Since virtual appliances (VAs) are just virtual machines (VMs), what happens when a SysAdmin spins down or moves one that happens to be your shiny new firewall protecting your production VMs behind it, accidentally or maliciously?  Brings new meaning to the phrase "failing closed."

Without getting into the vagaries of vendor specific mobility-enabled/enabling technologies, one of the issues with VMs/VAs is that there’s not really a good way of designating one as being “more important” or functionally differentiated such as “security” or “critical application” that would otherwise ensure a higher priority for service availability (read: don’t spin this down unless…) or provide a topological dependency hierarchy in virtualized network constructs.
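
About the best bolt-on you can do today is detect the condition after the fact. Here's a rough sketch of an out-of-band watchdog, assuming a purely hypothetical management API that exposes VM tags and power state; no real vendor API, endpoint or field names are implied:

```typescript
// Hypothetical watchdog: poll a made-up management API for VMs tagged as
// security appliances and complain loudly if any are powered off or missing.
// The endpoint, fields and tag name are assumptions for illustration only.
interface VmRecord {
  name: string;
  tags: string[];
  powerState: "poweredOn" | "poweredOff" | "suspended";
}

const MGMT_API = process.env.MGMT_API ?? "https://mgmt.example.internal/api/vms";

async function checkSecurityAppliances(): Promise<void> {
  const res = await fetch(MGMT_API, { headers: { Accept: "application/json" } });
  if (!res.ok) throw new Error(`management API returned ${res.status}`);
  const vms = (await res.json()) as VmRecord[];

  const appliances = vms.filter((vm) => vm.tags.includes("security-critical"));
  if (appliances.length === 0) {
    console.error("ALERT: no VMs tagged 'security-critical' were found at all");
  }
  for (const vm of appliances) {
    if (vm.powerState !== "poweredOn") {
      // In real life this would page someone; here it just logs.
      console.error(`ALERT: security VA '${vm.name}' is ${vm.powerState}`);
    }
  }
}

setInterval(() => checkSecurityAppliances().catch(console.error), 60_000);
```

Which, of course, only tells you your firewall went away after it already did; that's exactly the gap being described.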

Unlike physical environments, where system (server) administrators are segregated from access to network and security appliances, virtual environments offer no such separation. In Cloud environments (especially public, multi-tenant ones), where we are often reliant only upon virtual security capabilities since we have no option for physical alternatives, this is an interesting corner case.

We’ve talked a lot about visibility, audit and policy management in virtual environments and this is a poignant example.

/Hoff

*Despite the silly notion the Google dudes tried to suggest, namely that I equate virtualization with Cloud as one and the same, I don't.

Incomplete Thought: Storage In the Cloud: Winds From the ATMOS(fear)

May 18th, 2009 1 comment

I never metadata I didn’t like…

I first heard about EMC’s ATMOS Cloud-optimized storage “product” months ago:

EMC Atmos is a multi-petabyte offering for information storage and distribution. If you are looking to build cloud storage, Atmos is the ideal offering, combining massive scalability with automated data placement to help you efficiently deliver content and information services anywhere in the world.

I had lunch with Dave Graham (@davegraham) from EMC a ways back and while he was tight-lipped, we discussed ATMOS in lofty, architectural terms.  I came away from our discussion with the notion that ATMOS was more of a platform and less of a product, with a focus on managing not only stores of data, but also the context, metadata and policies surrounding it.  ATMOS tasted like a service provider play with a nod to very large enterprises who were looking to seriously tread down the path of consolidated and intelligent storage services.

I was really intrigued with the concept of ATMOS, especially when I learned that at least one of the people who works on the team developing it also contributed to the UC Berkeley project called OceanStore from 2005:

OceanStore is a global persistent data store designed to scale to billions of users. It provides a consistent, highly-available, and durable storage utility atop an infrastructure comprised of untrusted servers.

Any computer can join the infrastructure, contributing storage or providing local user access in exchange for economic compensation. Users need only subscribe to a single OceanStore service provider, although they may consume storage and bandwidth from many different providers. The providers automatically buy and sell capacity and coverage among themselves, transparently to the users. The utility model thus combines the resources from federated systems to provide a quality of service higher than that achievable by any single company.

OceanStore caches data promiscuously; any server may create a local replica of any data object. These local replicas provide faster access and robustness to network partitions. They also reduce network congestion by localizing access traffic.

Pretty cool stuff, right?  This just goes to show that plenty of smart people have been working on “Cloud Computing” for quite some time.

Ah, the ‘Storage Cloud.’

Now, while we’ve heard of and seen storage-as-a-service in many forms, including the Cloud, today I saw a really interesting article titled “EMC, AT&T open up Atmos-based cloud storage service:”

EMC Corp.’s Atmos object-based storage system is the basis for two cloud computing services launched today at EMC World 2009 — EMC Atmos onLine and AT&T’s Synaptic Storage as a Service.
EMC’s service coincides with a new feature within the Atmos Web services API that lets organizations with Atmos systems already on-premise “federate” data – move it across data storage clouds. In this case, they’ll be able to move data from their on-premise Atmos to an external Atmos computing cloud.

Boston’s Beth Israel Deaconess Medical Center is evaluating Atmos for its next-generation storage infrastructure, and storage architect Michael Passe said he plans to test the new federation capability.

Organizations without an internal Atmos system can also send data to Atmos onLine by writing applications to its APIs. This is different than commercial graphical user interface services such as EMC’s Mozy cloud computing backup service. “There is an API requirement, but we’re already seeing people doing integration” of new Web offerings for end users such as cloud computing backup and iSCSI connectivity, according to Mike Feinberg, senior vice president of the EMC Cloud Infrastructure Group. Data-loss prevention products from RSA, the security division of EMC, can also be used with Atmos to proactively identify confidential data such as social security numbers and keep them from being sent outside the user’s firewall.

AT&T is adding Synaptic Storage as a Service to its hosted networking and security offerings, claiming to overcome the data security worries many conservative storage customers have about storing data at a third-party data center.

The federation of data across storage clouds using APIs? Information cross-pollination and collaboration? Heavy, man.

Take plays like Cisco’s UCS with VMware’s virtualization and stir in VN-Tag with DLP/ERM solutions and sit it on top of ATMOS…from an architecture perspective, you’ve got an amazing platform for service delivery that allows for some slick application of policy that is information centric.  Sure, getting this all to stick will take time, but these are issues we’re grappling with in our discussions related to portability of applications and information.
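
For flavor, here's a rough sketch of what writing an object together with policy-bearing metadata to a REST-style object store of this sort might look like. The endpoint, header name and policy tags are invented for illustration and are not the actual Atmos API:

```typescript
// Hypothetical object-store client: store a blob along with metadata that a
// downstream policy engine (placement, retention, DLP) could act on.
// URL, header and tag names are illustrative assumptions only.
const STORE_URL = process.env.OBJECT_STORE_URL ?? "https://objects.example.net/rest/objects";

async function putObjectWithPolicy(body: Uint8Array, meta: Record<string, string>): Promise<string> {
  const res = await fetch(STORE_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/octet-stream",
      // Flatten metadata into a single header: key1=val1,key2=val2
      "x-object-meta": Object.entries(meta).map(([k, v]) => `${k}=${v}`).join(","),
    },
    body,
  });
  if (!res.ok) throw new Error(`object store returned ${res.status}`);
  return res.headers.get("location") ?? ""; // id/URL handed back by the store
}

// The metadata, not the payload, is what drives placement and policy.
putObjectWithPolicy(new TextEncoder().encode("claim form #1234"), {
  classification: "phi",
  "geo-restriction": "us-only",
  retention: "7y",
}).then((id) => console.log("stored as", id));
```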

Settling Back Down to Earth

This brings up a really important set of discussions that I keep harping on as the cold winds of reality start to blow.

From a security perspective, storage is the moose on the table that nobody talks about.  In virtualized environments we’re interconnecting all our hosts to islands of centralized SANs and NAS.  We’re converging our data and storage networks via CNAs and unified fabrics.

In multi-tenant Cloud environments all our data ends up being stored similarly, with the trust that segregation and security are appropriately applied.  Ever wonder how storage architectures never designed to do these sorts of things at scale can actually do so securely? Whose responsibility is it to manage the security of these critical centerpieces of our evolving "centers of data"?

So besides my advice that security folks need to run out and get their CCIE certs, perhaps you ought to sign up for a storage security class, too.  You can also start by reading this excellent book by Himanshu Dwivedi titled “Securing Storage.”

What are YOU doing about securing storage in your enterprise or Cloud engagements?  If your answer is LUN masking, here's four Excedrin, call me after the breach.
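
And if your answer is something better than LUN masking, one direction that doesn't depend on trusting the provider's segregation story is encrypting data before it ever lands on shared storage. A minimal sketch using Node's built-in crypto, with key management (the genuinely hard part) hand-waved away:

```typescript
// Envelope-style sketch: AES-256-GCM encrypt client-side so the shared,
// multi-tenant storage layer only ever sees ciphertext. Key management
// (KMS, rotation, escrow) is deliberately out of scope here.
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

export function seal(plaintext: Buffer, key: Buffer) {
  const iv = randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

export function open(iv: Buffer, ciphertext: Buffer, tag: Buffer, key: Buffer): Buffer {
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // must be set before final(); final() throws on tampering
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}

// Usage: only {iv, ciphertext, tag} ever touches the storage cloud.
const key = randomBytes(32); // in practice: pulled from a KMS, never hard-coded
const sealed = seal(Buffer.from("super secret document"), key);
console.log(open(sealed.iv, sealed.ciphertext, sealed.tag, key).toString());
```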

/Hoff

Security Will Not End Up In the Network…

June 3rd, 2008 9 comments

It’s not the destination, it’s the journey, stupid.

You can’t go a day without reading from the peanut gallery that it is "…inevitable that network security will eventually be subsumed into the network fabric."  I’m not picking on Rothman specifically, but he’s been banging this drum loudly of late.

For such a far-reaching, profound and prophetic statement, claims like these are strangely myopic and inaccurate…and then they’re exactly right.

Confused?

Firstly, it’s sort of silly and obvious to trumpet that "network security" will end up in the "network."  Duh.  What’s really meant is that "information security" will end up in the network, but that’s sort of goofy, too. You’ll even hear that "host-based security" will end up in the network…so let’s just say that what’s being angled at here is that security will end up in the network.

These statements are often framed within a temporal bracket that simply ignores the bigger picture and reads like a eulogy.  The reality is that historically we have come to accept that security and technology are cyclic and yet we continue to witness these terminal predictions defining an end state for security that has never arrived and never will.


Let me make plain my point: there is no final resting place for where and how security will "end up."

I’m visual, so let’s reference a very basic representation of my point.  This graph represents the cyclic transition over time of where and how we invest in security.

We ultimately transition between host-based security, information-centric security and network security over time.

We do this little shuffle based upon the effectiveness and maturity of technology, economics, cultural, societal and regulatory issues and the effects of disruptive innovation.  In reality, this isn’t a smooth sine wave at all, it’s actually more a classic dampened oscillation ala the punctuated equilibrium theory I’ve spoken about before, but it’s easier to visualize this way.

[Figure: "You Are Here" marker on a sine wave showing security investment focus cycling between the host, the information and the network over time]
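
If you'd rather see the shape than squint at the graph, here's a toy model of that dampened oscillation. The amplitude, decay and period are completely arbitrary; the only point is that the direction keeps reversing while the swings (arguably) shrink:

```typescript
// Toy model of the pendulum: positive values lean "invest in the network",
// negative lean "invest in the host/information". Constants are made up.
const amplitude = 1.0;
const decay = 0.15;  // how quickly the swings dampen
const period = 6;    // "years" per full cycle, purely illustrative

for (let t = 0; t <= 24; t += 2) {
  const x = amplitude * Math.exp(-decay * (t / period)) * Math.sin((2 * Math.PI * t) / period);
  const focus = x > 0.1 ? "network" : x < -0.1 ? "host/information" : "in transition";
  console.log(`t=${String(t).padStart(2)}y  x=${x.toFixed(2).padStart(6)}  focus: ${focus}`);
}
```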

Our investment strategy, and where security is seen as being "positioned," reverse direction over time and continue ad infinitum.  This has proven itself time and time again, yet we continue to be wowed by the prophetic utterances of people who on the one hand talk about these never-ending cycles and yet on the other pretend they don't exist by claiming the "death" of one approach over another.

Why?

To answer that let’s take a look at how the cyclic pendulum effect of our focus on security trends from the host to the information to the network and back again by analyzing the graph above.

  1. If we take a look at the arbitrary "starting" point indicated by the "You Are Here" dot on the sine wave above, I suggest that over the last 2-3 years or so we’ve actually headed away from the network as the source of all things security.   

    There are lots of reasons for this: economic, ideological, technological, regulatory and cultural.  If you want to learn more about this, check out my posts on how disruptive innovation fuels strategic transience.

    In short, the network has not been able to (and never will) deliver the efficacy, capabilities or cost-effectiveness desired to secure us from evil, so instead we look at actually securing the information itself.  The security industry messaging of late is certainly bearing testimony to that fact.  Check out this year’s RSA conference…

  2. As we focus then on information centricity, we see ERM, governance and compliance come back into focus.  As policies proliferate, we realize that this is really hard and we don’t have effective and ubiquitous data classification, policy affinity and heterogeneous enforcement capabilities.  We shake our heads at the ineffectiveness of the technology we have and hear the cries of pundits everywhere that we need to focus on the things that really matter…

    In order to ensure that we effectively classify data at the point of creation, we recognize that we can’t do this automagically and we don’t have standardized schemas or metadata across structured and unstructured data, so we’ll look at each other, scratch our heads and conclude that the applications and operating systems need modification to force fit policy, classification and enforcement.

    Rot roh.
     

  3. Now that we have the concept of policies and classification, we need the teeth to enforce them, so we start to overlay emerging technology solutions on the host, in applications and via the OS, solutions that are unfortunately non-transparent and affect the users and their ability to get their work done.  This gets labeled as a speed bump and we grapple with how to make it less of an impact on the business, since security has now slowed things down and we still have breaches because users have found creative ways of bypassing technology constraints in the name of agility and efficiency…
     
  4. At this point, the network catches up in its ability to process closer to "line speed," and some of the data classification functionality from the host commoditizes into the "network" — which by then is as much in the form of appliances as it is routers and switches — and always will be.  So as we round this upturn focusing again on being "information centric," with the help of technology, we seek to use our network investment to offset impact on our users.
     
  5. Ultimately, we get the latest round of "next generation" network solutions which promise to deliver us from our woes, but as we "pass go and collect $200" we realize we’re really at the same point we were at point #1.

‘Round and ’round we go.

So, there’s no end state.  It’s a continuum.  The budget and operational elements of who "owns" security and where it’s implemented simply follow the same curve.  Throw in disruptive innovation such as virtualization, and the entire concept of the "host" and the "network" morphs and we simply realize that it’s a shift in period on the same graph.

So all this pontification that it is "…inevitable that network security will eventually be subsumed into the network fabric" is only as accurate as what phase of the graph you reckon you’re on.  Depending upon how many periods you’ve experienced, it’s easy to see how some who have not seen these changes come and go could be fooled into not being able to see the forest for the trees.

Here’s the reality we actually already know and should not come to you as a surprise if you’ve been reading my blog: we will always need a blended investment in technology, people and process in order to manage our risk effectively.  From a technology perspective, some of this will take the form of controls embedded in the information itself, some will come from the OS and applications and some will come from the network.

Anyone who tells you differently has something to sell you or simply needs a towel for the back of his or her ears…

/Hoff

Asset Focused, Not Auditor Focused

May 3rd, 2008 5 comments

Gunnar Peterson wrote a great piece the other day on the latest productization craze in InfoSec – GRC (Governance, Risk Management and Compliance) wherein he asks "GRC – To Be or To Do?"

I don’t really recall when or from whence GRC sprung up as an allegedly legitimate offering, but to me it seems like a fashionably over-sized rug under which the existing failures of companies to effectively execute on the individual G, R, and C initiatives are conveniently being swept.

I suppose the logic goes something like this: "If you can't effectively govern, manage risk or measure compliance, it must be because what you're doing is fragmented and siloed.  What you need is a product/framework/methodology that takes potentially digestible deliverables and perspectives and 'evolves' them into a behemoth suite instead?"

I do not dispute that throughout most enterprises the definitions, approaches and processes for managing each function are largely siloed and fragmented, and I see the attractiveness of integrating and standardizing them, but I am unconvinced that re-badging a collection of control and policy frameworks constitutes a radical new approach.

GRC appears to be a way to sell more products and services under a fancy new name to address problems rather than evaluate and potentially change the way in which we solve them.  Look at who’s pushing this: large software companies and consultants as well as analysts looking to pin their research to something more meaningful.

From a first blush, GRC isn’t really about governance or managing risk.  It’s audit-driven compliance all tarted up.

It’s a more fashionable way of getting all your various framework and control definitions in one place and appealing to an auditor’s desire for centralized "stuff" in order to document the effectiveness of controls and track findings against some benchmark.  I’m not really sure where the business-driven focus comes into play?

It’s also sold as a more efficient way of reducing the scope and costs of manual process controls.  Fine.  Can’t argue with that.  I might even say it’s helpful, but at what cost?

Gunnar said:

GRC (or Governance, Risk Management, and Compliance for the uninitiated) is all the rage, but I have to say I think that again Infosec has the wrong focus.

Instead of Risk Management helping to deliver transparent Governance and as a natural by-product demonstrate compliance as a function of the former, the model’s up-ended with compliance driving the inputs and being mislabeled.

As I think about it, I'm not sure GRC would be something a typical InfoSec function would purchase or use unless forced, which is part of the problem.  I see internal audit driving the adoption, which, given today's pressures (especially in public companies), would first start with establishing gaps against regulatory compliance.

If the InfoSec function is considering an approach that focuses on protecting the things that matter most and managing risk to an acceptable level, one that is not compliance-driven but rather built upon a business- and asset-driven footing, then rather than make that left turn into GRC, Gunnar suggested:

Personally, I am happy sticking to classic infosec knitting – delivering confidentiality, integrity, and availability through authentication, authorization, and auditing. But if you are looking for a next generation conceptual horse to bet on, I don’t think GRC is it, I would look at information survivability. Hoff’s information survivability primer is a great starting point for learning about survivability.

Why survivability is more valuable over the long haul than GRC is that survivability is focused on assets not focused on giving an auditor what they need, but giving the business what it needs.

Seminal paper on survivability by Lipson, et al. "survivability solutions are best understood as risk management strategies that first depend on an intimate knowledge of the mission being protected." Make a difference – asset focus, not auditor focus.

For obvious reasons, I am compelled to say "me, too."

I would really like to talk to someone in a large enterprise who is using one of these GRC suites — I don’t really care which department you’re from.  I just want to examine my assertions and compare them against my efforts and understanding.

/Hoff