Archive for the ‘Disruptive Innovation’ Category

Incomplete Thought: Why We Need Open Source Security Solutions More Than Ever…

July 17th, 2010 1 comment

I don’t have time to write a big blog post and quite frankly, I don’t need to. Not on this topic.

I do, however, feel that it’s important to bring back into consciousness how very important open source security solutions are to us — at least those of us who actually expect to make an impact in our organizations and work toward making a dent in our security problem pile.

Why do open source solutions matter so much in our approach to dealing with securing the things that matter most to us?

It comes down to things we already know but are often paralyzed to do anything about:

  1. The threat curve and innovation of attackers outpace those of defenders by orders of magnitude (duh)
  2. Disruptive technology and innovation dramatically impact the operational, threat and risk modeling we have to deal with (duh duh)
  3. The security industry is not in the business of solving security problems that don’t have a profit motive/margin attached to them (ugh)

We can’t do much about #1 and #2 except be early adopters, be agile/dynamic and plan for change. I’ve written about this many times and built an entire series of presentations (Security and Disruptive Innovation) that Rich Mogull and I have taken to updating over the last few years.

We can do something about #3 and we can do it by continuing to invest in the development, deployment, support, and perhaps even the eventual commercialization of open source security solutions.

(To be clear, it’s not that commercialization is required for success; often it just indicates that a solution has become mainstream and valued and that money *can* be made.)

When you look at why most open source project creators bring a solution to market, it’s because the solution generally is not commercially available, it solves an immediate need and it’s contributed to by a community. These are all fantastic reasons to use, support, extend and contribute back to the open source movement — even if you don’t code, you can help by improving the roadmaps of these projects by making suggestions and promoting their use.

Open source security solutions deliver, and they deliver quickly, because the roadmaps and feature integration occur in an agile, meritocratic and vetted manner that oftentimes lacks polish but delivers immediate value — especially given their cost.

We’re stuck in a loop (or a Hamster Sine Wave of Pain) because the solutions to the problems we really need solved are not developed by the companies in the best position to develop them in a timely manner. Why? Because when these emerging solutions are evaluated, they live or die by one thing: TAM (total addressable market.)

If there’s no big $$$ attached and someone can’t make the case within an organization that this is a strategic (read: revenue generating) big bet, the big companies wait for a small innovative startup to develop the technology (or an open source tool), see if it lives long enough for market demand to drive revenues and then buy them…or sometimes develop a competitive solution.

Classical crossing the chasm/Moore stuff.

The problem here is that this cycle is horribly broken and we see perfectly awesome solutions die on the vine. Sometimes they come back to life years later, cyclically, when the pain gets big enough (and there’s money to be made) or when the “market” of products and companies consolidates, commoditizes and ultimately becomes a feature.

I’ve got hundreds of examples I can give of this phenomenon — and I bet you do, too.

That’s not to say we don’t have open-source-derived success stories (Snort, Metasploit, ClamAV, Nessus, OSSec, etc.) but we just don’t have enough of them. Further, there are disruptions such as virtualization and cloud computing that fundamentally change the game; harnessed in conjunction with open source solutions, they can accelerate the delivery and velocity of solutions because of how impactful the platform shift can be.

I’ve also got dozens of awesome ideas that could/would fundamentally solve many attendant issues we have in security — but the timing, economics, culture, politics and readiness/appetite for adoption aren’t there commercially…but they can be via open source.

I’m going to start a series which identifies and highlights solutions that are either available as kernel-nugget technology or past-life approaches that I think can and should be taken on as open source projects that could fundamentally help our cause as a community.

Maybe someone can code/create open source solutions out of them that can help us all.  We should encourage this behavior.

We need it more than ever now.

/Hoff


Incomplete Thought: “The Cloud in the Enterprise: Big Switch or Little Niche?”

April 19th, 2010 1 comment

Joe Weinman wrote an interesting post in advance of his panel at Structure ’10 titled “The Cloud in the Enterprise: Big Switch or Little Niche?” wherein he explored the future of Cloud adoption.

In this blog, while framing the discussion with Nick Carr‘s (in)famous “Big Switch” utility analog, he asks the question:

So will enterprise cloud computing represent The Big Switch, a dimmer switch or a little niche?

…to which I respond:

I think it will be analogous to the “Theory of Punctuated Equilibrium,” wherein we see patterns not unlike classical dampened oscillations with many big swings ultimately settling down until another disruption causes big swings again.  In transition we see niches appear until they get subsumed in the uptake.

Or, in other words such as those I posted on Twitter: “…lots of little switches AND big niches.”

Go see Joe’s panel. Better yet, comment on your thoughts here. 😉

/Hoff


Chattin’ With the Boss: “Securing the Network” (Waiting For the Jet Pack)

March 7th, 2010 8 comments

At the RSA security conference last week I spent some time with Tom Gillis on a live uStream video titled “Securing the Network.”

Tom happens to be (as he points out during a rather funny interlude) my boss’ boss — he’s the VP and GM of Cisco’s STBU (Security Technology Business Unit.)

It’s an interesting discussion (albeit with some self-serving Cisco tidbits) surrounding how collaboration, cloud, mobility, virtualization, video, the consumerization of IT and, um, jet packs are changing the network and how we secure it.

Direct link here.

Embedded below:


Slides from My Cloud Security Alliance Keynote: The Cloud Magic 8 Ball (Future Of Cloud)

March 7th, 2010 No comments

Here are the slides from my Cloud Security Alliance (CSA) keynote from the Cloud Security Summit at the 2010 RSA Security Conference.

The punchline is as follows:

All this iteration and debate on the future of the “back-end” of Cloud Computing — the provider side of the equation — is ultimately less interesting than how the applications and content served up will be consumed.

Cloud Computing provides for the mass re-centralization of applications and data in mega-datacenters while, simultaneously, incredibly powerful mobile computing platforms provide for the mass re-distribution of (in many cases the same) applications and data.  We’re fixated on the security of the former but ignoring that of the latter — at our peril.

People worry about how Cloud Computing puts their applications and data in other people’s hands. The reality is that mobile computing — and the clouds that are here already and will form because of them — already put, quite literally, those applications and data in other people’s hands.

If we want to “secure” the things that matter most, we must focus BACK on information centricity and building survivable systems if we are to be successful in our approach.  I’ve written about the topics above many times, but this post from 2009 is quite apropos: The Quandary Of the Cloud: Centralized Compute But Distributed Data. You can find other posts on Information Centricity here.

Slideshare direct link here (embedded below.)


Calling All Private Cloud Haters: Amazon Just Peed On Your Fire Hydrant…

August 26th, 2009 15 comments

Werner Vogels brought a smile to my face today with his blog titled “Seamlessly Extending the Data Center – Introducing Amazon Virtual Private Cloud.”  In short:

We have developed Amazon Virtual Private Cloud (Amazon VPC) to allow our customers to seamlessly extend their IT infrastructure into the cloud while maintaining the levels of isolation required for their enterprise management tools to do their work.

In one fell swoop, AWS has:

  • Legitimized Private Cloud as a reasonable, needed, and prudent step toward Cloud adoption for enterprises,
  • Substantiated the value proposition of Private Cloud as a way of removing a barrier to Cloud entry for enterprises, and
  • Validated the ultimate vision toward hybrid Clouds and Inter-Cloud

They made this announcement from the vantage point of operating as a Public Cloud provider — in many cases THE Public Cloud provider of choice for those arguing from an exclusionary perspective that Public Cloud is the only way forward.

Now, AWS’ position on Private Cloud is pretty clear; straight from the horse’s mouth, Werner says “Private Cloud is not the Cloud” (see below) — but it’s also clear they’re willing to sell you some 😉

The cost for VPC isn’t exorbitant, but it’s not free, either, so the business case is clearly there (see the official VPC site): VPN connectivity is $0.05 per VPN connection, with data transfer rates of $0.10 per GB inbound and from $0.17 per GB down to $0.10 per GB outbound depending upon volume (with heavy data replication or intensive workloads, people are going to need to watch the odometer.)
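
To make the “watch the odometer” point concrete, here’s a back-of-the-envelope sketch in Python using the figures above. Treating the connection charge as hourly, billing all outbound traffic at the top tier, and the example workload itself are my assumptions, not numbers from AWS or from this post:

```python
# Rough monthly cost for the VPC VPN figures quoted above.
# Assumptions (not from the post): the $0.05 connection charge is per
# connection-hour, outbound traffic is billed at the top $0.17/GB tier,
# and the example workload is invented for illustration.

HOURS_PER_MONTH = 720            # 30-day month
VPN_CONNECTION_PER_HOUR = 0.05   # USD
INBOUND_PER_GB = 0.10            # USD
OUTBOUND_PER_GB = 0.17           # USD, highest published tier

def monthly_vpc_vpn_cost(connections: int, gb_in: float, gb_out: float) -> float:
    """Rough monthly spend for VPN connections plus data transfer."""
    connection_cost = connections * VPN_CONNECTION_PER_HOUR * HOURS_PER_MONTH
    transfer_cost = gb_in * INBOUND_PER_GB + gb_out * OUTBOUND_PER_GB
    return connection_cost + transfer_cost

# Example: one VPN tunnel, 500 GB replicated in, 1 TB pulled back out.
print(f"${monthly_vpc_vpn_cost(1, 500, 1024):.2f} per month")  # roughly $260
```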

I’m going to highlight a couple of nuggets from his post:

We continuously listen to our customers to make sure our roadmap matches their needs. One important piece of feedback that mainly came from our enterprise customers was that the transition to the cloud of more complex enterprise environments was challenging. We made it a priority to address this and have worked hard in the past year to find new ways to help our customers transition applications and services to the cloud, while protecting their investments in their existing IT infrastructure. …

Private Cloud Is Not The Cloud – These CIOs know that what is sometimes dubbed “private cloud” does not meet their goal as it does not give them the benefits of the cloud: true elasticity and capex elimination. Virtualization and increased automation may give them some improvements in utilization, but they would still be holding the capital, and the operational cost would still be significantly higher.

We have been listening very closely to the real requirements that our customers have and have worked closely with many of these CIOs and their teams to understand what solution would allow them to treat the cloud as a seamless extension of their datacenter, where their standard management practices can be applied with limited or no modifications. This needs to be a solution where they get all the benefits of cloud as mentioned above [Ed: eliminates cost, elastic, removes “undifferentiated heavy lifting”] while treating it as a part of their datacenter.

We have developed Amazon Virtual Private Cloud (Amazon VPC) to allow our customers to seamlessly extend their IT infrastructure into the cloud while maintaining the levels of isolation required for their enterprise management tools to do their work.

With Amazon VPC you can:

  • Create a Virtual Private Cloud and assign an IP address block to the VPC. The address block needs to be CIDR block such that it will be easy for your internal networking to route traffic to and from the VPC instance. These are addresses you own and control, most likely as part of your current datacenter addressing practice.
  • Divide the VPC addressing up into subnets in a manner that is convenient for managing the applications and services you want run in the VPC.
  • Create a VPN connection between the VPN Gateway that is part of the VPC instance and an IPSec-based VPN router on your own premises. Configure your internal routers such that traffic for the VPC address block will flow over the VPN.
  • Start adding AWS cloud resources to your VPC. These resources are fully isolated and can only communicate to other resources in the same VPC and with those resources accessible via the VPN router. Accessibility of other resources, including those on the public internet, is subject to the standard enterprise routing and firewall policies.

Amazon VPC offers customers the best of both the cloud and the enterprise managed data center:

  • Full flexibility in creating a network layout in the cloud that complies with the manner in which IT resources are managed in your own infrastructure.
  • Isolating resources allocated in the cloud by only making them accessible through industry standard IPSec VPNs.
  • Familiar cloud paradigm to acquire and release resources on demand within your VPC, making sure that you only use those resources you really need.
  • Only pay for what you use. The resources that you place within a VPC are metered and billed using the familiar pay-as-you-go approach at the standard pricing levels published for all cloud customers. The creation of VPCs, subnets and VPN gateways is free of charge. VPN usage and VPN traffic are also priced at the familiar usage based structure

All the benefits from the cloud with respect to scalability and reliability, freeing up your engineers to work on things that really matter to your business.
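
For those curious what Werner’s four bullets translate to in practice, here is a minimal, hypothetical sketch of that create-VPC/subnet/VPN-gateway flow using the boto3 SDK. This is my illustration, not Amazon’s documentation: at launch this workflow was exposed through the EC2 API tools, and every identifier, CIDR block, ASN and gateway address below is a made-up placeholder.

```python
# Minimal sketch of the VPC + VPN workflow described above, using boto3.
# All CIDR blocks, the BGP ASN and the customer gateway IP are
# illustrative placeholders, not values from the post.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Create the VPC with an address block routable from your datacenter.
vpc = ec2.create_vpc(CidrBlock="10.20.0.0/16")["Vpc"]

# 2. Carve the VPC into subnets that match how you manage applications.
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.20.1.0/24")["Subnet"]

# 3. Create and attach the AWS-side VPN gateway...
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId=vpc["VpcId"])

# ...and describe the IPsec VPN router on your own premises.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000
)["CustomerGateway"]

# 4. Establish the VPN connection; your internal routers then send traffic
#    for 10.20.0.0/16 over this tunnel.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
)["VpnConnection"]
print(vpn["VpnConnectionId"])
```

Tooling aside, the point stands: the VPC is just an address block you own, carved into subnets, reachable only over the IPsec tunnel your own routers terminate.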

Jeff Barr did a great job of giving a little more detail on his blog but also brought up a couple of points I need to noodle on from a security perspective:

Because the VPC subnets are used to isolate logically distinct functionality, we’ve chosen not to immediately support Amazon EC2 security groups. You can launch your own AMIs and most public AMIs, including Microsoft Windows AMIs. You can’t launch Amazon DevPay AMIs just yet, though.

The Amazon EC2 instances are on your network. They can access or be accessed by other systems on the network as if they were local. As far as you are concerned, the EC2 instances are additional local network resources — there is no NAT translation. EC2 instances within a VPC do not currently have Internet-facing IP addresses.

We’ve confirmed that a variety of Cisco and Juniper hardware/software VPN configurations are compatible; devices meeting our requirements as outlined in the box at right should be compatible too. We also plan to support Software VPNs in the near future.

The notion of the VPC and associated VPN connectivity coupled with the “software VPN” statement above reminds me of Cohesive F/T’s VPN-Cubed solution.  While this is an IaaS-focused discussion, it’s only fair to bring up Google’s Secure Data Connector that was announced some moons ago from a SaaS/PaaS perspective, too.

I would be remiss in my musings were I not to also suggest that Cloud brokers and Cloud service providers such as RightScale, GoGrid, Terremark, etc. were on the right path in responding to customers’ needs well before this announcement.

Further, it should be noted that now that the 800lb Gorilla has staked a flag, this will bring up all sorts of additional auditing and compliance questions, as any sort of broad connectivity into and out of security zones and asset groupings always does.  See the PCI debate (How to Be PCI Compliant In the Cloud).

At the end of the day, this is a great step forward — one I am happy to say I’ve been talking about and presenting (see my Frogs presentation) for the last two years.

/Hoff

Most CIO’s Not Sold On Cloud? Good, They Shouldn’t Be…

June 7th, 2009 13 comments

I find it amusing that there is so much drama surrounding the notion of Cloud adoption.

There are those who paint Cloud as the savior of today’s IT great unwashed and others who claim it’s simply hype and not ready for prime time.

They’re both right and Cloud adoption is exactly where it should be today.

Here’s a great illustration: “Cloud or Fog? Two-Thirds of UK CIOs and CFOs Not Yet Sold on Cloud“:

Sixty-seven per cent of Chief Information Officers and Chief Financial Officers in UK enterprises say they are either not planning to adopt cloud computing (35 per cent) or are unsure (32 per cent) of whether their company will adopt cloud computing during the next two years, according to a major new report from managed hosting (http://www.ntteuropeonline.com/) specialists NTT Europe Online.

Whose perspective you share comes down to well-established market dynamics relating to technology adoption and should not come as a surprise to anyone.

One of the best-known examples of this can be visualized by a graphical representation of what Geoffrey Moore wrote about in his book “Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers“:

[Figure: the technology adoption curve]

Because I’m lazy, I’ll just refer you to the Wikipedia entry which describes “the Chasm” and the technology adoption lifecycle:

In Crossing the Chasm, Moore begins with the diffusion of innovations theory from Everett Rogers, and argues there is a chasm between the early adopters of the product (the technology enthusiasts and visionaries) and the early majority (the pragmatists). Moore believes visionaries and pragmatists have very different expectations, and he attempts to explore those differences and suggest techniques to successfully cross the “chasm,” including choosing a target market, understanding the whole product concept, positioning the product, building a marketing strategy, choosing the most appropriate distribution channel and pricing.

Crossing the Chasm is closely related to the Technology adoption lifecycle where five main segments are recognized; innovators, early adopters, early majority, late majority and laggards. According to Moore, the marketer should focus on one group of customers at a time, using each group as a base for marketing to the next group. The most difficult step is making the transition between visionaries (early adopters) and pragmatists (early majority). This is the chasm that he refers to. If a successful firm can create a bandwagon effect in which the momentum builds and the product becomes a de facto standard. However, Moore’s theories are only applicable for disruptive or discontinuous innovations. Adoption of continuous innovations (that do not force a significant change of behavior by the customer) are still best described by the original Technology adoption lifecycle. Confusion between continuous and discontinuous innovation is a leading cause of failure for high tech products.

Cloud is firmly entrenched in the Chasm, clawing its way out as the market matures*.

It will, over the next 18-24 months by my estimates, arrive at the early majority phase.

Those who are today evangelizing Cloud Computing are the “technology enthusiasts” and “visionaries” in the “innovator” and “early adopter” phases respectively.  If you look at the article I quoted at the top of the blog, CIOs are generally NOT innovators or early adopters, so…

So don’t be put off or overly excited when you see hyperbolic references to Cloud adoption because depending upon who you are and who you’re talking about, you’ll likely always get a different perspective for completely natural reasons.

/Hoff

* To be clear, I wholeheartedly agree with James Urquhart that “Cloud” is not a technology, it’s an operational model. So as not to confuse people, within the context of the “technology adoption curve” above you can likewise see how “model” or “paradigm” works, also.  It doesn’t really have to be limited to a pure technology.

Dear Mr. Schneier, If Cloud Is Nothing New, Why Are You Talking So Much About It?

June 3rd, 2009 13 comments


Update: Please see this post if you’re wondering why I edited this piece.

I read a recent story in the Guardian from Bruce Schneier titled “Be Careful When You Come To Put Your Trust In the Clouds” in which he suggests that Cloud Computing is “…nothing new.”

Fundamentally it’s hard to argue with that title, as clearly we’ve got issues with security and trust models as they relate to Cloud Computing, but the byline seems to be at odds with Schneier’s ever-grumpy dismissal of Cloud Computing in the first place.  We need transparency and trust: got it.

Many of the things Schneier says make perfect sense, whilst others just make me scratch my head in the abstract.  Let’s look at a couple of them:

This year’s overhyped IT concept is cloud computing. Also called software as a service (Saas), cloud computing is when you run software over the internet and access it via a browser. The salesforce.com customer management software is an example of this. So is Google Docs. If you believe the hype, cloud computing is the future.

Clearly there is a lot of hype around Cloud Computing, but I believe it’s important — especially as someone who spends a lot of time educating and evangelizing — that people like myself and Schneier effectively separate the hype from the hope and try and paint a clearer picture of things.

To that point, Schneier does his audience a disservice by dumbing down Cloud Computing to nothing more than outsourcing via SaaS.  Throwing the baby out with the rainwater seems a little odd to me and while it’s important to relate to one’s audience, I keep sensing a strange cognitive dissonance whilst reading Schneier’s opining on Cloud.

Firstly, and as I’ve said many times, Cloud Computing is more than just Software as a Service (SaaS.)  SaaS is clearly the more mature and visible set of offerings in the evolving Cloud Computing taxonomy today, but one could argue that players like Amazon with their Infrastructure as a Service (IaaS) or even the aforementioned Google and Salesforce.com with the Platform as a Service (PaaS) offerings might take umbrage with Schneier’s suggestion that Cloud is simply some “…software over the internet” accessed “…via a browser.”

Overlooking IaaS and PaaS is clearly a huge miss here and it calls into question the point Schneier makes when he says:

But, hype aside, cloud computing is nothing new . It’s the modern version of the timesharing model from the 1960s, which was eventually killed by the rise of the personal computer. It’s what Hotmail and Gmail have been doing all these years, and it’s social networking sites, remote backup companies, and remote email filtering companies such as MessageLabs. Any IT outsourcing – network infrastructure, security monitoring, remote hosting – is a form of cloud computing.

The old timesharing model arose because computers were expensive and hard to maintain. Modern computers and networks are drastically cheaper, but they’re still hard to maintain. As networks have become faster, it is again easier to have someone else do the hard work. Computing has become more of a utility; users are more concerned with results than technical details, so the tech fades into the background.

<sigh> Welcome to the evolution of technology and disruptive innovation.  What’s the point?

Fundamentally, as we look beyond speeds and feeds, Cloud Computing — at all layers and offering types — is making huge headway and driving innovation in the evolution of automation, autonomics and the applied theories of dealing with massive scale in compute, network and storage realms.  Sure, the underlying problems — and even some of the approaches — aren’t new in theory, but they are in practice.  The end result may very well be that a consumer of a service doesn’t see elements that are new technologically, as they are abstracted, but the economic, cultural, business and operational differences are startling.

If we look at what makes up Cloud Computing, the five elements I always point to are:

[Figure: the five key ingredients of Cloud Computing]

Certainly the first three are present today — and have been for some while — in many different offerings.  However, combining the last two (on-demand, self-service scale and dynamism with new economic models of consumption and allocation) is quite different, especially when doing so at extreme levels of scale with multi-tenancy.

So let’s get to the meat of the matter: security and trust.

But what about security? Isn’t it more dangerous to have your email on Hotmail’s servers, your spreadsheets on Google’s, your personal conversations on Facebook’s, and your company’s sales prospects on salesforce.com’s? Well, yes and no.

IT security is about trust. You have to trust your CPU manufacturer, your hardware, operating system and software vendors – and your ISP. Any one of these can undermine your security: crash your systems, corrupt data, allow an attacker to get access to systems. We’ve spent decades dealing with worms and rootkits that target software vulnerabilities. We’ve worried about infected chips. But in the end, we have no choice but to blindly trust the security of the IT providers we use.

Saas moves the trust boundary out one step further – you now have to also trust your software service vendors – but it doesn’t fundamentally change anything. It’s just another vendor we need to trust.

Fair enough.  So let’s chalk one up here to “Cloud is nothing new — we still have to put our faith and trust in someone else.”  Got it.  However, by again excluding the notion of PaaS and IaaS, Bruce fails to recognize the differences in both responsibility and accountability that these differing models bring; limiting Cloud to SaaS, while convenient for a cute argument, does not a complete case make:

[Figure: responsibility versus accountability across SaaS, PaaS and IaaS]

To what level you are required to and/or feel comfortable transferring responsibility depends upon the provider and the deployment model; the risks associated with an IaaS-based service can be radically different than those of one from a SaaS vendor. With SaaS, security can be thought of from a monolithic perspective — that of the provider; they are responsible for it.  In the case of PaaS and IaaS, these trade-offs become more apparent and you’ll find that this “outsourcing” of responsibility is diminished whilst the mantle of accountability is not.  This is pretty important if you want to be generic in your definition of “Cloud.”

Here’s where I see Bruce going off the rails from his “Cloud is nothing new” rant, much in the same way I’d expect he would suggest that virtualization is nothing new, either:

There is one critical difference. When a computer is within your network, you can protect it with other security systems such as firewalls and IDSs. You can build a resilient system that works even if those vendors you have to trust may not be as trustworthy as you like. With any outsourcing model, whether it be cloud computing or something else, you can’t. You have to trust your outsourcer completely. You not only have to trust the outsourcer’s security, but its reliability, its availability, and its business continuity.

You don’t want your critical data to be on some cloud computer that abruptly disappears because its owner goes bankrupt . You don’t want the company you’re using to be sold to your direct competitor. You don’t want the company to cut corners, without warning, because times are tight. Or raise its prices and then refuse to let you have your data back. These things can happen with software vendors, but the results aren’t as drastic.


Trust is a concept as old as humanity, and the solutions are the same as they have always been. Be careful who you trust, be careful what you trust them with, and be careful how much you trust them. Outsourcing is the future of computing. Eventually we’ll get this right, but you don’t want to be a casualty along the way.

So therefore I see a huge contradiction.  How we secure — or allow others to secure — our data is very different in Cloud; it *is* something new in its practical application.  There are profound operational, business and technical (let alone regulatory, legal, governance, etc.) differences that do pose new challenges. Yes, we should take the best practices related to “outsourcing” that we’ve built over time and apply them to Cloud.  However, the collision course of virtualization, converged fabrics and Cloud Computing is pushing the boundaries of all we know.

Per the examples above, our challenges are significant.  The tech industry thrives on the ebb and flow of evolutionary punctuated equilibrium; what’s old is always new again, so it’s important to remember a few things:

  1. Harking back (a whopping 60 years) to the “dawn of time” in the IT/Computing industry to make the case that things “aren’t new” is sort of silly and simply proves you’re the tallest and loudest guy in a room full of midgets.  Here’s your sign.
  2. I don’t see any suggestions for how to make this better in all these rants about mainframes, only FUD.
  3. If “outsourcing is the future of computing” and we are to see both evolutionary and revolutionary disruptive innovation, shouldn’t we do more than simply hope that “…eventually we’ll get this right?”

The past certainly repeats itself, which explains why every 20 years bell-bottoms come back in style…but ignoring the differences in application, however incremental, is a bad idea.  In many regards we have not learned from our mistakes and fail to recognize patterns, but you can’t drive forward by only looking in the rear-view mirror, either.

Regards,

/Hoff

Security Analyst Sausage Machine Firms Quash Innovation

July 10th, 2008 15 comments

Quis custodiet ipsos custodes? Who will watch the watchers?

Short and sweet and perhaps a grumpy statement of the obvious: Security Analyst Sausage Machine Firms quash innovation in vendors’ development cycles and in many cases prevent the consumer — their customers — from receiving actual solutions to real problems because of the stranglehold they maintain on what defines and categorizes a "solution."

What do I mean?

If you’re a vendor — emerging or established — and create a solution that is fantastic and solves real business problems but doesn’t fit neatly within an existing "quadrant," "cycle," "scope," or "square," you’re SCREWED.  You may sell a handful of your widgets to early adopters, but your product isn’t real unless an analyst says it is and you still have money in the bank after a few years to deliver it.

If you’re a customer, you may never see that product develop and see the light of day, and yet you’re the ones who pay membership dues to those same analyst firms to advise you on what to do!

I know that we’ve all basically dropped trow and given in to the fact that we’ve got to follow the analyst hazing rituals, but that doesn’t make it right.  It really sucks monkey balls.

What’s funny to me is that we have these huge lawsuits filed against corporations for anti-trust and unfair business practices, and there’s nobody who contests this oligopoly of sausage machine analysts — except for other former analysts who form their own analyst firms to do battle with their former employers…but in a kinder, gentler, "advisory" capacity, of course…

Speaking of which, some of the folks who lead these practices oftentimes have never used, deployed, tested, or sometimes even seen the products they take money for and advise their clients on.  Oh, and objectivity?  Yeah, right.  If an analyst doesn’t like your idea, your product, your philosophy, your choice in clothing or you, you’re done.

This crappy system stifles innovation; it grinds real solutions into the dirt such that small startups that really could be "the next big thing" are now often forced to be born as seed-technology starters for larger companies to buy for M&A pennies, so those companies can slow-roll the IP into their roadmaps over a long time and smooth the curve once markets are "mature."

Guess who defines them as being "mature?"  Right.

Crossing the chasm?  Reaching the tipping point?  How much of that even matters anymore?

Ah, the innovator’s dilemma…

If you have a product that well and truly does X, Y and Z, where X is a feature that conforms and fits into a defined category but Y and Z — while truly differentiating and powerful — do not, you’re forced to focus on, develop around and hype X, label your product as being X, and not invest as much in Y and Z.

If you miss the market timing and can’t afford to schmooze effectively and don’t look forward enough with a business model that allows for flexibility, you may make the world’s best X, but when X commoditizes and Y and Z are now the hottest "new" square, chances are you won’t matter anymore, even if you’ve had it for years.

The product managers, marketing directors and salesfolk are forced to fit a product within an analyst’s arbitrary product definition or risk not getting traction, missing competitive analyses/comparisons or even failing to get funding; ever try to convince a VC that they should fund you when you’re the "only one" in the space and there’s no analyst recognition of a "market?"

Yech.

A vendor’s excellent solution can simply wither and die on the vine in a battle of market definition attrition because the vendor is forced to conform and neuter a product in order to make a buck and can’t actually differentiate or focus on the things that truly make it a better solution.

Who wins here? 

Not the vendors.  Not the customers. The analysts do. 

The vendor pays them, with a shitload of kowtowing and money, for the privilege of showing up in a box so they get recognized — and not necessarily for the things that truly matter — until the same analyst changes his/her mind and recognizes that perhaps Y and Z are "real" or creates category W, and the vicious cycle starts anew.

So whether you’re a vendor struggling to make a great solution or a customer trying to solve real business problems, who watches the watchers?

/Hoff

Security Will Not End Up In the Network…

June 3rd, 2008 9 comments

It’s not the destination, it’s the journey, stupid.

You can’t go a day without reading from the peanut gallery that it is "…inevitable that network security will eventually be subsumed into the network fabric."  I’m not picking on Rothman specifically, but he’s been banging this drum loudly of late.

For such a far-reaching, profound and prophetic statement, claims like these are strangely myopic and inaccurate…and then they’re exactly right.

Confused?

Firstly, it’s sort of silly and obvious to trumpet that "network security" will end up in the "network."  Duh.  What’s really meant is that "information security" will end up in the network, but that’s sort of goofy, too. You’ll even hear that "host-based security" will end up in the network…so let’s just say that what’s being angled at here is that security will end up in the network.

These statements are often framed within a temporal bracket that simply ignores the bigger picture and reads like a eulogy.  The reality is that historically we have come to accept that security and technology are cyclic and yet we continue to witness these terminal predictions defining an end state for security that has never arrived and never will.


Let me make plain my point: there is no final resting place for where and how security will "end up."

I’m visual, so let’s reference a very basic representation of my point.  This graph represents the cyclic transition over time of where and how we invest in security.

We ultimately transition between host-based security, information-centric security and network security over time.

We do this little shuffle based upon the effectiveness and maturity of technology, economics, cultural, societal and regulatory issues and the effects of disruptive innovation.  In reality, this isn’t a smooth sine wave at all; it’s actually more a classic dampened oscillation ala the punctuated equilibrium theory I’ve spoken about before, but it’s easier to visualize this way.

[Figure: the cyclic security investment curve, with a "You Are Here" marker]
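
Since the original graph hasn’t survived in this archive, here is a rough, assumed reconstruction of the idea in Python: a dampened oscillation with the investment focus swinging between the network and the host/information over time. The curve parameters and the position of the "You Are Here" marker are my own guesses, purely for illustration.

```python
# Illustrative reconstruction of the graph described above: a dampened
# oscillation of security investment focus over time, swinging between
# the network (top) and the host/information (bottom). The curve shape
# and the "You are here" position are assumptions for illustration only.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 500)
focus = np.exp(-0.15 * t) * np.cos(2 * np.pi * t / 3)  # damped cosine

fig, ax = plt.subplots(figsize=(7, 3))
ax.plot(t, focus)
ax.axhline(0, color="gray", linewidth=0.5)
ax.annotate("You are here", xy=(t[100], focus[100]), xytext=(2.5, 0.6),
            arrowprops=dict(arrowstyle="->"))
ax.set_yticks([1, 0, -1])
ax.set_yticklabels(["Network", "Information", "Host"])
ax.set_xlabel("Time")
ax.set_title("Cyclic transition of security investment focus (illustrative)")
plt.tight_layout()
plt.show()
```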

Our investment strategy and where security is seen as being "positioned" reverses direction over time and continues ad infinitum.  This has proven itself time and time again yet we continue to be wowed by the prophetic utterances of people who on the one hand talk about these never-ending cycles and yet on the other pretend they don’t exist by claiming the "death" of one approach over another. 
 

Why?

To answer that let’s take a look at how the cyclic pendulum effect of our focus on security trends from the host to the information to the network and back again by analyzing the graph above.

  1. If we take a look at the arbitrary "starting" point indicated by the "You Are Here" dot on the sine wave above, I suggest that over the last 2-3 years or so we’ve actually headed away from the network as the source of all things security.   

    There are lots of reasons for this; economic, ideological, technological, regulatory and cultural.  If you want to learn more about this, check out my posts on how disruptive Innovation fuels strategic transience.

    In short, the network has not been able to (and never will) deliver the efficacy, capabilities or cost-effectiveness desired to secure us from evil, so instead we look at actually securing the information itself.  The security industry messaging of late is certainly bearing testimony to that fact.  Check out this year’s RSA conference…
     

  2. As we focus then on information centricity, we see the resurgence of ERM, governance and compliance come into focus.  As policies proliferate, we realize that this is really hard and we don’t have effective and ubiquitous data classification, policy affinity and heterogeneous enforcement capabilities.  We shake our heads at the ineffectiveness of the technology we have and hear the cries of pundits everywhere that we need to focus on the things that really matter…

    In order to ensure that we effectively classify data at the point of creation, we recognize that we can’t do this automagically and we don’t have standardized schemas or metadata across structured and unstructured data, so we’ll look at each other, scratch our heads and conclude that the applications and operating systems need modification to force fit policy, classification and enforcement.

    Rot roh.
     

  3. Now that we have the concept of policies and classification, we need the teeth to ensure it, so we start to overlay emerging technology solutions on the host in applications and via the OS’s that are unfortunately non-transparent and affect the users and their ability to get their work done.  This becomes labeled as a speed bump and we grapple with how to make this less impacting on the business since security has now slowed things down and we still have breaches because users have found creative ways of bypassing technology constraints in the name of agility and efficiency…
     
  4. At this point, the network catches up in its ability to process closer to "line speed," and some of the data classification functionality from the host commoditizes into the "network" — which by then is as much in the form of appliances as it is routers and switches — and always will be.  So as we round this upturn focusing again on being "information centric," with the help of technology, we seek to use our network investment to offset impact on our users.
     
  5. Ultimately, we get the latest round of "next generation" network solutions which promise to deliver us from our woes, but as we "pass go and collect $200" we realize we’re really right back at point #1.

‘Round and ’round we go.

So, there’s no end state.  It’s a continuum.  The budget and operational elements of who "owns" security and where it’s implemented simply follow the same curve.  Throw in disruptive innovation such as virtualization, and the entire concept of the "host" and the "network" morphs and we simply realize that it’s a shift in period on the same graph.

So all this pontification that it is "…inevitable that network security will eventually be subsumed into the network fabric" is only as accurate as what phase of the graph you reckon you’re on.  Depending upon how many periods you’ve experienced, it’s easy to see how some who have not seen these changes come and go could be fooled into not being able to see the forest for the trees.

Here’s the reality we actually already know, and it should not come as a surprise if you’ve been reading my blog: we will always need a blended investment in technology, people and process in order to manage our risk effectively.  From a technology perspective, some of this will take the form of controls embedded in the information itself, some will come from the OS and applications and some will come from the network.

Anyone who tells you differently has something to sell you or simply needs a towel for the back of his or her ears…

/Hoff

The Challenge of Virtualization Security: Organizational and Operational, NOT Technical

March 25th, 2008 7 comments

Taking the bull by the horns…

I’ve spoken many times over the last year on the impact virtualization brings to the security posture of organizations.  While there are certainly technology issues that we must overcome, we don’t have solutions today that can effectively deliver us from evil. 

Anyone looking for the silver bullet is encouraged to instead invest in silver buckshot.  No shocker there.

There are certainly technology and solution providers looking to help solve these problems, but honestly, they are constrained by the availability and visibility to the VMM/Hypervisors of the virtualization platforms themselves. 

Obviously announcements like VMware’s VMsafe will help turn that corner, but VMsafe requires re-tooling of ISV software and new versions of the virtualization platforms.  It’s a year+ away and only addresses concerns for a single virtualization platform provider (VMware) and not others.

The real problem of security in a virtualized world is not technical, it is organizational and operational.

With the consolidation of applications, operating systems, storage, information, security and networking — all virtualized into a single platform rather than being discretely owned, managed and supported by (reasonably) operationally-mature teams — the biggest threat we face in virtualization is that we have now lost not only visibility, but also the clearly-defined lines of demarcation garnered from the separation of duties we had in the non-virtualized world.

Many companies have segmented off splinter cells of "virtualization admins" from the server teams, and they are often solely responsible for the virtualization platforms, which includes the care, feeding, diapering and powdering of not only the operating systems and virtualization platforms, but the networking and security functionality also.

No offense to my brethren in the trenches, but this is simply a case of experience and expertise.  Server admins are not experts in network or security architectures and operations, just as the latter cannot hope to be experts in the former’s domain.

We’re in an arms race now where virtualization brings brilliant flexibility, agility and cost savings to the enterprise, but ultimately further fractures the tenuous relationships between the server, network and security teams.

Now that the first-pass consolidation pilots of virtualizing non-critical infrastructure assets have been held up as beacon examples of ROI in our datacenters, security and networking teams are exercising their veto powers as virtualization efforts creep toward critical production applications, databases and transactional systems.

Quite simply, the ability to express risk, security posture, compliance, troubleshooting and the measuring of SLAs and dependencies within the construct of a virtualized world is much more difficult than in the discretely segregated physical world, and when taken to the mat on the issues, the virtual server admins simply cannot address them competently in the language of the security and risk teams.

This is going to make for some unneeded friction in what was supposed to be a frictionless effort.  If you thought the security teams were thought of as speed bumps before, you’re not going to like what happens soon when they try to delay/halt a business-driven effort to reduce costs, speed time-to-market, increase availability and enable agility.

I’ll summarize my prior recommendations as to how to approach this conundrum in a follow-on post, but the time is now to get these teams together and craft the end-play strategies and desired end-states for enterprise architecture in a virtualized world before we end up right back where we started 15+ years ago…on the hamster wheel of pain!

/Hoff