
Author Archive

Is What We Need…An OpSec K/T Boundary Extinction-Level Event?

June 21st, 2012

Tens of millions of Aons (a new quantification of time based on Amazon Web Services AMI spin-ups) from now, archeologists and technosophers will look back on the inevitable emergence of Cloud in the decade following the double-oughts and muse about the mysterious disappearance of the security operations species…

Or not.

The “Cloud Security, Meh!” crowd are an interesting bunch. They don’t seem to like change much.  To be fair, they’re not incentivized to.  However, while difficult, change is good…it just takes a lot to understand that sometimes.

It occurs to me that if we expect behavior to change in the way in which we approach “security,” it must start with a reset of expectations surrounding how we evaluate outcomes and how we’re measured; most importantly, the actual security leadership itself must change.

Most seasoned CxOs these days who have been in the business for 15+ years are in their late 30s/early 40s.  Most of “us” — from official scientifical research I have curated [at the bar] — came from System Administrator/Network Administrator roles back in the ’80s/’90s.

Now, what’s intriguing is that back then, “security” was just one functional component and responsibility of many duties slapped on the back of overworked and underfunded “router jockeys” or “Unix neckbeards.”  Back in the day we did it all — we managed the network, massaged the Solaris/NT boxes, helped deploy and manage the apps and were responsible for “securing” it all as we connected stuff to the Internet.

You know, like, um, DevOps.

So today in larger organizations (not so much in smaller orgs/startups), we have a raging rejection of this generalized approach to service delivery/IT by the VERY SAME individuals who arose phoenix-like from the crater left when the Internet exploded and the rampant adoption of technology and siloed operational models became “best practice.” Compliance didn’t help.  Then they got promoted.

In many cases then, the bristled reaction by security folks to things like virtualization, Cloud, Agile, DevOps, etc. is highly generational.  The up-and-coming rank-and-file digital natives who are starting to break into the industry will know these things as “normal,” much like a preschooler uses gestures on an iPad…it just…is.

However, their leadership — “us” — the 40+ year-olds who are large and in charge are busy barking that youngsters should get off our IT lawn.  This is very much a generational issue.

So I think what that means is that ultimately we’re waiting for our own version of the K/T boundary extinction-level “opportunity,” the horizon event at the boundary of the Cretaceous/Tertiary periods 65 million years ago where almost all of the Earth’s large vertebrates — all dinosaurs, plesiosaurs, mosasaurs, and pterosaurs — suddenly became extinct.  Boom.  Gone.  Damned meteorites.

Now, unless the next great piece of malware can target, infect and destroy humans as we Bing/Google/click our way into stupidity (coming next week from Iran?) à la Stuxnet/Flame, we’re not going to see these stodgy C(I)SOs vanish instantly, but over the next two decades, we’ll see a new generation arise who think, act and believe differently than we do today…I just hope it doesn’t take that long.

This change…it’s natural. It’s evolution, and patterns like these repeat (see the theory of punctuated equilibrium) even in the face of revolution.  It’s messy.

More often than not, it’s not the technology that’s the problem with “security” when we hit one of these inflection points in computing. No, it’s the organizational, operational, cultural, fiscal, and (dare I say) religious issues that hold us back.  Innovation breeds more innovation unless it’s shackled by people who can’t think outside of the box.

That right there is what defines a dino/plesio/mosa/ptero-saur.

Come to think of it, maybe we do need an OpSec extinction-level event to move us forward instead of waiting 20 years for the AARP-forced slide to Florida.

Or, in the words of Gunny Highway from Heartbreak Ridge, we must “Improvise, adapt and overcome.”

If that’s not a DevOps Darwinian double-entendre, I don’t know what is 😉

Don’t be a dinosaur.

/Hoff


PrivateCore: Another Virtualization-Enabled Security Solution Launches…

June 21st, 2012

On the heels of Bromium’s coming-out party yesterday at GigaOM’s Structure conference, PrivateCore — a company founded by VMware vets Oded Horovitz and Carl Waldspurger and Google’s Steve Weis — announced a round of financing and what I interpret as a more interesting and focused raison d’être.

In videos released previously, Oded described the company’s focus on protecting servers (cloud or otherwise) against physical incursion: attacks such as extracting contents from memory, where physical access is required.

From what I could glean, the PrivateCore solution utilizes encryption and the CPU cache (need to confirm) to provide memory isolation and render these attack vectors moot.

What’s interesting is the way in which PrivateCore is now highlighting the vehicle for their solution: a “hardened hypervisor.”

It will be interesting to see how well they can market this approach/technology (and to whom), what sort of API/management planes their VMM provides and how long they stand alone before being snapped up — perhaps even by VMware or Citrix.

More good action (and $2.25M in funding) in the virtual security space.

/Hoff


Elemental: Leveraging Virtualization Technology For More Resilient & Survivable Systems

June 21st, 2012

Yesterday saw the successful launch of Bromium at GigaOM’s Structure conference in San Francisco.

I was privileged to spend some stage time with Stacey Higginbotham and Simon Crosby (co-founder, CTO, mentor and good friend) after Simon’s big reveal of Bromium‘s operating model and technology approach.

While product specifics weren’t disclosed, we spent some time chatting about Bromium’s approach to solving a particularly tough set of security challenges with a focus on realistic outcomes given the advanced adversaries and attack methodologies in use today.

At the heart of our discussion* was the notion that in many cases one cannot detect, let alone prevent, specific types of attacks. This requires a new way of containing the impact of exploited vulnerabilities (known or otherwise) that target the human factor as much as they do weaknesses in underlying operating systems and application technologies.
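To make the “contain rather than detect” idea concrete, here is a deliberately weak, hypothetical sketch: each untrusted input is handled in a throwaway worker with nothing shared, so a successful exploit is confined to a disposable context. Real micro-virtualization relies on hardware isolation (Intel VT); an OS-level worker is only an analogy for the pattern, not an equivalent, and none of this describes Bromium’s actual implementation.

```typescript
// Analogy only: confine work on untrusted input to a disposable context.
// Hardware-isolated micro-VMs are far stronger than a worker thread; this
// sketch just illustrates the "isolate by default" pattern.
import { Worker } from "node:worker_threads";

function handleUntrusted(payload: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(
      // Hypothetical parsing logic runs inside the throwaway worker; only the
      // payload is passed in, nothing else is shared with the host.
      `const { parentPort, workerData } = require("node:worker_threads");
       parentPort.postMessage(workerData.payload.trim().toUpperCase());`,
      { eval: true, workerData: { payload } }
    );
    worker.once("message", (result) => { resolve(String(result)); worker.terminate(); });
    worker.once("error", (err) => { reject(err); worker.terminate(); });
  });
}
```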

I think Kurt Marko did a good job summarizing Bromium in his article here, so if you’re interested in learning more check it out. I can tell you that as a technology advisor to Bromium and someone who is using the technology preview, it lives up to the hype and gives me hope that we’ll see even more novel approaches to usable security leveraging technology like this.  More will be revealed as time goes on.

That said, with productization details purposely left vague, Bromium’s leveraged implementation of Intel’s VT technology and its “microvisor” approach prompted comments yesterday from many folks who were reminded of what they called “similar approaches” (however right/wrong they may be) to using virtualization technology and/or “sandboxing” to provide more “secure” systems.  I recall the following coming up in passing conversation yesterday:

  • Determina (VMware acquired)
  • GreenBorder (Google acquired)
  • Trusteer
  • Invincea
  • DeepSafe (Intel/McAfee)
  • Intel TXT w/MLE & hypervisors
  • Self Cleansing Intrusion Tolerance (SCIT)
  • PrivateCore (Newly launched by Oded Horovitz)
  • etc…

I don’t think Simon would argue that the underlying approach of utilizing virtualization for security (even for an “endpoint” application) is new, but the approach toward making it invisible and transparent from a user experience perspective certainly is.  Operational simplicity and not making security the user’s problem is a beautiful thing.

Here is a video of Simon’s and my session, “Secure Everything.”

What’s truly of interest to me — based on what Simon said yesterday — is that the application of this approach could be just as at home in a “server,” cloud or mobile application as it is in a classical desktop environment.  There are certainly dependencies (such as VT) today, but the notion that we can leverage virtualization for better resilience, survivability and assurance for more “trustworthy” systems is exciting.

I for one am very excited to see how we’re progressing from “bolt on” to more integrated approaches in our security models. This will bear fruit as we become more platform- and application-centric in our approach to security, allowing us to leverage fundamentally “elemental” security components for more meaningfully trustworthy computing.

/Hoff

* The range of topics was rather hysterical; from the Byzantine Generals’ Problem to K/T Boundary extinction-class events to the Mexican/U.S. border fence, it was chock full of analogs 😉


Bridging the Gap Between Devs & Security – A Collaborative Suggestion…

May 23rd, 2012

After my keynote at Gluecon (Shit My Cloud Evangelist Says…Just Not To My CSO), I was asked by an attendee what he could do within his organization to repair the damage and/or mistrust between developers and security organizations in enterprises.

Here’s what I suggested based on past experience:

  1. Reach out and have a bunch of “brown bag lunches” wherein you host-swap each week; devs and security folks present to one another on relevant, interesting or new solutions in their respective areas
  2. Pick a project that takes a yet-to-be-solved interesting business challenge that isn’t necessarily on the high priority project list and bring the dev and security teams together as if it were an actual engagement.

Option 1 starts the flow of information.  Option 2 treats the project as if it were high priority but allows security and dev to work together to talk about platform choices, management, security, etc. and because it’s not mission critical, mistakes can be made and learned from…together.

For example, pick something like building a new app service that uses node.js and MongoDB and figure out how to build, deploy and secure it…as if you were going to deploy to public cloud from day one (and maybe you will).
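To make that concrete, here is a minimal sketch of the kind of starter service the two teams could build and then argue about securing together. It assumes Express, Helmet and the official MongoDB driver; the endpoints, database name and validation limits are illustrative choices, not a prescription.

```typescript
// Toy "notes" service: the sort of low-stakes project where devs and security
// can jointly work through config, input validation and deployment choices.
import express from "express";
import helmet from "helmet";
import { MongoClient } from "mongodb";

// Configuration comes from the environment, never the source tree -- the same
// habit you'd want before pushing to a public cloud on day one.
const MONGO_URI = process.env.MONGO_URI ?? "mongodb://localhost:27017";
const PORT = Number(process.env.PORT ?? 3000);

async function main() {
  const client = new MongoClient(MONGO_URI);
  await client.connect();
  const notes = client.db("brownbag").collection("notes");

  const app = express();
  app.use(helmet());                          // sensible security headers by default
  app.use(express.json({ limit: "10kb" }));   // bound request body size

  app.post("/notes", async (req, res) => {
    const text = req.body?.text;
    // Validate input before it ever reaches the datastore.
    if (typeof text !== "string" || text.length > 1000) {
      return res.status(400).json({ error: "text must be a string of 1000 chars or fewer" });
    }
    const result = await notes.insertOne({ text, createdAt: new Date() });
    res.status(201).json({ id: result.insertedId });
  });

  app.get("/notes", async (_req, res) => {
    res.json(await notes.find({}, { projection: { text: 1 } }).limit(50).toArray());
  });

  app.listen(PORT, () => console.log(`listening on :${PORT}`));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Even something this small surfaces the right arguments: where secrets live, what gets validated, what gets logged, and who owns the deployment pipeline.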

You’ll be amazed to see the trust it builds, especially when developers enroll security in their problem and let them participate from the start instead of being the speed bump later.

10 minutes later it’ll be a DevOps love-fest. 😉

/Hoff


Incomplete Thought: On Horseshoes & Hand Grenades – Security In Enterprise Virt/Cloud Stacks

May 22nd, 2012

It’s not really *that* incomplete of a thought, but I figure I’d get it down on vPaper anyway…be forewarned, it’s massively over-simplified.

Over the last five years or so, I’ve spent my time working with enterprises who are building and deploying large scale (relative to an Enterprise’s requirements, that is) virtualized data centers and private cloud environments.

For the purpose of this discussion, I am referring to VMware-based deployments given the audience and solutions I will reference.

To this day, I’m often shocked by how many of these organizations that seek to provide contextualized security for intra- and inter-VM traffic treat the use of physical or virtual security solutions as an either-or decision.

For the sake of example, I’ll reference the architectural designs which were taken verbatim from my 2008 presentation, The Four Horsemen of the Virtualization Security Apocalypse.

If you’ve seen/read the FHOTVA, you will recollect that there are many tradeoffs involved when considering the use of virtual security appliances and their integration with physical solutions.  Notably, an all-virtual or all-physical approach will constrain you in one form or another from the perspective of efficacy, agility, and architectural, operational or economic impact.

The topic that has a bunch of hair on it is where I see many enterprises trending: forgoing virtual solutions and using physical appliances only:

[Figure omitted: the physical-only enforcement design referenced below.]

…the bit that’s missing in the picture is the external physical firewall connected to that physical switch.  People are still, in this day and age, ONLY relying on horseshoeing all traffic between VMs (in the same or different VLANs) out of the physical cluster machine and to an external firewall.

Now, there are many physical firewalls that allow for virtualized contexts, zoning, etc., but that’s really dependent upon dumping trunked VLAN ports from the firewall/switches into the server and then “extending” virtual network contexts, policies, etc. upstream in an attempt to flatten the physical/virtual networks in order to force traffic through a physical firewall hop — sometimes at layer 2, sometimes at layer 3.

It’s important to realize that physical firewalls DO offer benefits over the virtual appliances in terms of functionality, performance and some capabilities that depend on hardware acceleration, but from an overall architectural perspective they’re not sufficient, especially given the visibility into and access to virtual networks that physical firewalls often lack when segregated.

Here’s a hint: physical-only firewall solutions alone will never scale with the agility required to service the virtualized workloads they are designed to protect.  Further, a physical-only solution won’t satisfy the need to dynamically provision and orchestrate security as close to the workload as possible; when workloads move, the policies will generally break, and it will most certainly add latency and ultimately hamper network designs (both physical and virtual).

Virtual security solutions — especially those which integrate with the virtualization/cloud stack (in VMware’s case, vCenter & vCloud Director) — offer the ability to do the following:

…which is to say that there exists the capability to utilize  virtual solutions for “east-west” traffic and physical solutions for “north-south” traffic, regardless of whether these VMs are in the same or different VLAN boundaries or even across distributed virtual switches which exist across hypervisors on different physical cluster members.

For east-west traffic (and even north-south models depending upon network architecture) there’s no requirement to horseshoe traffic physically. 

It’s probably important to mention that while the next slide is out-of-date from the perspective of the advancement of VMsafe APIs, there’s not only the ability to inject a slow-path (user mode) virtual appliance between vSwitches, but also to utilize a set of APIs to instantiate security policies at the hypervisor layer via a fast-path kernel module/filter set…this means greater performance and the ability to scale better across physical clusters and distributed virtual switching:

Interestingly, there also exists the capability to actually integrate policies and zoning from physical firewalls and have them “flow through” to the virtual appliances to provide “micro-perimeterization” within the virtual environment, preserving policy and topology.

There are at least three hypervisor management-integrated solutions on the market today:

  • VMware vShield App
  • Cisco VSG + Nexus 1000v
  • Juniper vGW

Note that the solutions above can be thought of as “layer 2” solutions — it’s a poor way of describing them, but think “inter-VM” introspection for workloads in VLAN buckets.  All three vendors above also have, or are bringing to market, complementary “layer 3” solutions that function as virtual “edge” devices and act as a multi-function “next-hop” gateway between groups of VMs/applications (née vDC).  For the sake of brevity, I’m omitting those here (they are incredibly important, however).

They (layer 2 solutions) are all reasonably mature and offer various performance, efficacy and feature set capabilities. There are also different methods for plumbing the solutions and steering traffic to them…and these have huge performance and scale implications.

It’s important to recognize that the failure to consider virtual solutions often seems to be based largely on ignorance of the need for, and availability of, such solutions.

However, other reasons surface such as cost, operational concerns and compliance issues with security teams or assessors/auditors who don’t understand virtualized environments well enough.

From an engineering and architectural perspective, however, excluding them from design consideration is disappointing.

Enterprises should consider a hybrid of the two models: virtual where you can, physical where you must.
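To make that rule of thumb concrete, here is a hypothetical sketch of the placement decision. The flow attributes and enforcement points are invented for illustration and don’t map to any particular vendor’s API (vShield, VSG or vGW).

```typescript
// "Virtual where you can, physical where you must," expressed as a toy
// placement function. Purely illustrative; not a vendor API.
type EnforcementPoint = "virtual-appliance" | "physical-edge-firewall";

interface Flow {
  srcInVirtualDC: boolean;        // source VM lives in this virtual data center
  dstInVirtualDC: boolean;        // destination VM lives in this virtual data center
  needsHardwareAccel: boolean;    // e.g., extreme throughput or inspection that needs hardware assist
}

function placeEnforcement(flow: Flow): EnforcementPoint {
  const eastWest = flow.srcInVirtualDC && flow.dstInVirtualDC;
  // East-west (VM-to-VM) traffic stays inside the cluster and is inspected by
  // the hypervisor-integrated virtual appliance: no horseshoe out to the
  // physical firewall and back.
  if (eastWest && !flow.needsHardwareAccel) return "virtual-appliance";
  // North-south traffic, or anything that genuinely needs hardware assist,
  // still hairpins through the physical edge.
  return "physical-edge-firewall";
}

// Two VMs on different hosts in the same virtual DC:
console.log(placeEnforcement({ srcInVirtualDC: true, dstInVirtualDC: true, needsHardwareAccel: false }));
// -> "virtual-appliance"
```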

If you’ve considered virtual solutions but chose not to deploy them, can you comment on why and share your thinking with us (even if it’s for the reasons above)?

/Hoff


Overlays: Wasting Away Again In Abstractionville…

May 5th, 2012

I’m about to get in a metal tube and spend 14 hours in the Clouds.  I figured I’d get something off my chest while I sit outside in the sun listening to some Jimmy Buffett.

[Network] overlays.  They bug me.  Let me tell you why.

The Enterprise, when considering “moving to the Cloud,” generally takes one of two approaches depending upon culture, leadership, business goals, maturity and sophistication:

  1. Go whole-hog with an all-in Cloud strategy. 
    Put an expiration date on maintaining/investing in legacy apps/infrastructure and instead build an organizational structure, technology approach, culture, and operational model that is designed around building applications that are optimized for “cloud” — and that means SaaS, PaaS, and IaaS across public, private and hybrid models with a focus on how application delivery and information (including protecting it) are very different from legacy deployments, or…
  2. Adopt a hedging strategy to get to Cloud…someday.
    This usually means opportunistically picking low-risk, low-impact, low-hanging fruit that can be tip-toed toward and scraping together the existing “rogue” projects already underway, sprinkling in some BYOD, pointing to a virtualized datacenter and calling a 3-day provisioning window with change control “on-demand” and “Cloud.”  Oh, and then deploying gateways, VPNs, data encryption and network overlays as an attempt to plug holes by paving over them, and calling that “Cloud,” also.

See that last bit?

This is where so-called “software defined networking (SDN),” the myriad of models that utilize “virtualization” and all sorts of new protocols and service delivery mechanisms are being conflated into the “will it blend” menagerie called “Cloud.”  It’s an “eyes wide shut” approach.

Now, before you think I’m being dismissive of “virtualization” or SDN, I’m not.  I believe. Wholesale. But within the context of option #2 above, it’s largely a waste of time, money, and effort.  It’s putting lipstick on a pig.

You either chirp or get off the twig.

Picking door #2 is where the Enterprise looks at shiny new things based on an article in the WSJ or Wired, or on a peer-group golf outing, and says “I bet if we added yet another layer of abstraction atop the rapidly abstracting piles of shite we already have, we would be more agile, nimble, efficient and secure.”  We would be “cloud” enabled.

[To a legacy-minded Enterprise,] Cloud is the revenge of VPN and PKI…

The problem is that just like the folks in Maine will advise: “You can’t get there from here.”  I mean, you can, but the notion that you’ll actually pull it off by stacking turtles, applying band-aids and squishing the tyranny of VLANs by surrounding them in layer 3 network overlays and calling this the next greatest thing since sliced bread is, well, bollocks.

Look, I think SDN, protocols like OpenFlow and VXLAN/NVGRE, etc. are swell.  I think the separation of control and data planes and the notion that I can programmatically operate my network is awesome.  I think companies like Nicira and Big Switch are doing really interesting things.  I think that CloudStack, OpenStack and VMware present real opportunity to make things “better.”

Hey, look, we’re just like Google and Amazon Web Services now!

But to an Enterprise without a real plan as to what “Cloud” really means to their business, these are largely overlays within the context of #2.  Within the context of #1, they’re simply mom and apple pie and are, for the most part, invisible.  That’s not where the focus actually is.

That said, for a transitional Enterprise, these things give pause, but should be looked upon as breadcrumbs that mark a journey, not the destination.  They’re a crutch and another band-aid to solve legacy problems.  They’re really a means to an end.

These “innovations” *are* a step in the right direction.  They will let us do great things. They will let a whole new generation of operational models and a revitalized ecosystem flourish AND it will encourage folks to think differently.  But about what?  And to solve what problem(s)?

If you simply expect to layer them on your legacy infrastructure, operational models and people and call it “Cloud,” you’re being disingenuous.

Ultimately, to abuse an analogy, network overlays are a layover on the itinerary of our journey to the Cloud, but not where we should ultimately land. I see too many companies focusing on the transition…and by the time they get there, the target will have moved.  Again.  Just like it always does.

They’re hot now because they reflect something we should have done a long time ago, but like hypervisors, one day [soon] network overlays will become just a feature and not a focus.

/Hoff


Tin Foil Hats: On BBQ Brisket & Security Purists…

April 14th, 2012

I’ve always enjoyed Anthony Bourdain‘s antics.

When I first encountered him on FoodTV, he was busy digesting the remnants of some sad mammal whilst commentating appropriately with grease-stained chin and mumbling narrative, extolling the virtues of the roadside “chef” who’d managed to handily hose the crap out of the wrong end of the deep-fried duodenum he was consuming.

I’ve furthered my appreciation for his unique style of ex-crackhead edginess, and enjoyed greatly his visceral verbiage as I devoured chapter after chapter of his books.

I’ve watched his numerous TV series, chortling in glee as he gently dropped bleeped-out F-Bombs, lambasted his producers on all topics imaginable, and struggled not to lose his foie-gras overboard when his check-writers sent him boating.

Good times.

Oh, I follow him on Twitter also, as I’ve come to find his little quips quite amusing, as expected.

However…

Yesterday, he went batshit crazy and started ranting about something that someone else I admire greatly, Steven Raichlen, innocently mentioned with regard to BBQ.

Brisket, to be specific. The holiest of holies in the BBQ world, especially if you’re from that oddly-shaped, but giant state of Texas.

Holy shit.  This wasn’t going to end well.

I braced myself for the impact.

Basically, Raichlen was discussing the Texas Crutch, the process in which, upon a stall — the point wherein the collagen fails to continue converting to gelatin because the temperature has reached a point in its cooking cycle where it refuses to budge — one wraps the brisket in foil to encourage it along some.

It’s really not that big a deal.

It’s not something I do often. It’s not something I even prefer to do. It’s something, when things just aren’t going my way and the Bourbon’s not helping, that I begrudgingly force upon my favorite bovine by-product.  It usually helps and I ultimately unwrap it to allow the bark to crisp back up before it becomes a black soggy mess resembling (and tasting like) a mushy, peaty bog.

THAT, it occurred to me, was Bourdain’s real complaint — or so I thought.  He held in disdain the mismanagement of the process which would end up with an external, texturally-offensive crust.

I was wrong. Bourdain, it would unfold, treats the entire process as a violation as impure as defiling a religious artifact, all the while missing the point that it is, by definition and title, generally done as a “crutch.”

He pushed forward, ignoring the contrariety, and rallied his culinary gendarmerie.  He even managed to pull a “Crazy Ivan” and suggest that this sort of unpalatable madness was as evil as the now-trendy sous-vide that the top players in the industry were all now cursing at in symphony.  Many a slow-cooking, low-temperature water bath shed a tear this day.

He righted HMS MadCow and then prattled on deliriously, desperately whipping up a frenzy, furiously retweeting supporters of his cause. The folks from Modernist Cuisine piped up. So did other zealots from the no-foil camp. It seems that everyone who quipped was positioned behind their computers, burning mesquite, oak or hickory smudges, chanting rub recipes, whilst they sharpened their pitchforks and tongs.

Ultimately, and by name, he then called upon the Sorcerer himself, Alton Brown, for backup.

However, Monsieur Brown, being the scientific fellow he is and not one to engage in “faith-based cookery,” simply replied with a common-sense evaluation of “foil-gate” in which he stated this was a matter of choice and preferred outcome.

Specifically, he mused, if one wants more smoky, wood-imbued BBQ flavor, skip the Crutch and deal with the added cooking time, which can often lead to dryness.  On the other hand, if one wants moist brisket, go with the “Crutch” and use the braise method.  He did, rather correctly, also note that “Real brisket (meaning Texas) is not like any other barbecue.”

Like, duh.  But I’m not really sure that was Raichlen’s point in the first place.

Alton took the high road, but many others who would not have it joined the fray, frothing at the very thought of things like foil or injected “enhancers” such as beef broth. It seemed there was no place for common sense or scenarios tuned for alternative outcomes in the world of BBQ Brisket.

Or was there?

Others, like myself, simply blinked at the ensuing religious fervor with a mixture of bemusement and redress, shrugged incredulously and then chuckled when many of the very same naysayers went on to suggest that techniques  such as foil and broth injection should only be utilized in and saved for “competition.”

You know, “competition,” wherein the product judged as the “best” amongst many is often produced with things like beef broth injection and tin foil crutching.

So purity, it seems, goes right out the window (or BBQ pit) when one is trying to win a BBQ contest, an argument or a popularity contest.  Especially on the Internet.

I’m going to leave it to you to connect the ribs between this debate wherein “good enough” and “perfect” are ridiculously traded off and determine why I find such parallels deliciously ironic between BBQ and Security purists.

Suffice it to say, there are a lot of backseat “pitmasters” who will often tell you about “perfect” but likely can’t tell the difference between the creation of a smoke ring and blowing one.

Tin Foil hats, it seems, are equally as contentious (and funny) on the BBQ circuit as they are in the Security Circus.

I’ma let you finish, but my Backwoods is calling.  I’m gonna go unwrap my brisket.  Enjoy your tofu.

/Beaker

P.S. I left out many of the juicy bits from the argument, but I think it’s best summarized by the following tweet:

Or: “Outcomes: Reason, not religion.”


Incomplete Thought: Will the Public Cloud Create a Generation Of Network Stupid?

March 26th, 2012

Short and sweet…

With the continued network abstraction and “simplicity” presented by public cloud platforms like AWS EC2*, wherein instances are singly-homed and the level of networking is so dumbed down as to make deep networking knowledge “unnecessary,” will the skill sets of next-generation operators become “network stupid?”

The platform operators will continue to hire skilled network architects, engineers and operators, but the ultimate consumers of these services are being sold on the fact that they won’t have to, and in many cases this means that “networking” as a discipline may face a skills shortage.

The interesting implication here is that with all this abstraction and these opaque stacks, resilient design is still dependent upon so much “networking” — although much of it is layer 4 and above.  Yep, it’s still TCP/IP, but the implications of the dumbing down of the stack will be profound, especially if one recognizes that ultimately these Public clouds will interconnect to Private clouds, and the two networking models are profoundly differentiated.

…think VMware versus AWS EC2…or check out the meet-in-the-middle approach with OpenStack and Quantum…
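As an illustration of how much of that “networking” migrates up into application code, here is a minimal sketch of layer-7 failover standing in for the multihoming and routing a network engineer would otherwise have designed. The endpoints and timeout values are invented for the example.

```typescript
// Illustrative only: when instances are singly-homed and the platform hides
// the network, retry, timeout and failover logic end up living in the
// application instead of in routing protocols.
const ENDPOINTS = [
  "https://api-us-east.example.internal/orders",
  "https://api-us-west.example.internal/orders",
];

async function fetchWithTimeout(url: string, ms: number): Promise<Response> {
  const ctrl = new AbortController();
  const timer = setTimeout(() => ctrl.abort(), ms);
  try {
    return await fetch(url, { signal: ctrl.signal });
  } finally {
    clearTimeout(timer);
  }
}

// Try each endpoint in turn: "multihoming" re-implemented at layer 7.
async function resilientGet(): Promise<Response> {
  let lastErr: unknown;
  for (const url of ENDPOINTS) {
    try {
      const res = await fetchWithTimeout(url, 2000);
      if (res.ok) return res;
      lastErr = new Error(`bad status ${res.status} from ${url}`);
    } catch (err) {
      lastErr = err; // timeout or connection failure; fail over to the next endpoint
    }
  }
  throw lastErr;
}
```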

I’m concerned that we’re still so bifurcated in our discussions of networking and the Cloud.

On the one hand we’re yapping at one another about stretched L2 domains, fabrics and control/data plane separation, or staring into the abyss of L7 proxies and DPI…all the while the implications of SDN and the emergence of new protocols, the majority of which are irrelevant to the consumers deploying VMs and apps atop IaaS and PaaS (not to mention SaaS), make these discussions seem silly.

On the other hand, DevOps/NoOps folks push their code to platforms that rely less and less on needing to understand or care how the underlying “network” works.

It’s hard to tell whether “networking” in the pure sense will be important in the long term.

Or as Kaminsky so (per usual) elegantly summarized:

What are your thoughts?

/Hoff

*…and yet we see more “complex” capabilities emerging in scenarios such as AWS VPC…


Security As A Service: “The Cloud” & Why It’s a Net Security Win

March 19th, 2012

If you’ve been paying attention to the rash of security startups entering the market today, you will no doubt notice the theme wherein the majority of them are, from the get-go, organizing around deployment models which operate from “The Cloud.”

We can argue that “Security as a service” usually refers to security services provided by a third party using the SaaS (software as a service) model, but there’s a compelling set of capabilities that enables companies large and small to be effective, efficient and cost-manageable as we embrace the “new” world of highly distributed applications, content and communications (cloud and mobility combined).

As with virtualization, when one discusses “security” and “cloud computing,” three perspectives are often conflated (from my post “Security: In the Cloud, For the Cloud & By the Cloud…“):

In the same way that I differentiated “Virtualizing Security, Securing Virtualization and Security via Virtualization” in my Four Horsemen presentation, I ask people to consider these three models when discussing security and Cloud:

  1. In the Cloud: Security (products, solutions, technology) instantiated as an operational capability deployed within Cloud Computing environments (up/down the stack.) Think virtualized firewalls, IDP, AV, DLP, DoS/DDoS, IAM, etc.
  2. For the Cloud: Security services that are specifically targeted toward securing OTHER Cloud Computing services, delivered by Cloud Computing providers (see next entry). Think cloud-based Anti-spam, DDoS, DLP, WAF, etc.
  3. By the Cloud: Security services delivered by Cloud Computing services which are used by providers in option #2 and which often rely on those features described in option #1.  Think, well…basically any service these days that brands itself as Cloud… ;)

What I’m talking about here is really item #3; security “by the cloud,” wherein these services utilize any cloud-based platform (SaaS, PaaS or IaaS) to deliver security capabilities on behalf of the provider or ultimate consumer of services.

For the SMB/SME/Branch, one can expect a hybrid model of on-premises physical (multi-function) devices that also incorporate some sort of redirect or offload to these cloud-based services. Frankly, the same model works for the larger enterprise, but in many cases regulatory, privacy and IP concerns arise.  This is where “private” (or dedicated) versions of these services are requested (either on-premises or off, but dedicated).
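As a purely hypothetical sketch of that redirect/offload pattern (the service URLs, verdict shape and policy check below are invented for illustration and do not correspond to any real vendor API), an on-premises gateway might hand content off to a cloud-based scanner like this:

```typescript
// Hypothetical redirect/offload: the gateway sends content to a cloud scanning
// service, routing regulated content to a dedicated ("private") instance and
// everything else to the shared multi-tenant one.
interface ScanVerdict {
  clean: boolean;
  reason?: string;
}

const SHARED_SCAN_URL = "https://scan.example-cloudsec.invalid/v1/scan";          // hypothetical
const DEDICATED_SCAN_URL = "https://scan.dedicated.example-corp.invalid/v1/scan"; // hypothetical

async function scan(url: string, payload: Uint8Array): Promise<ScanVerdict> {
  const res = await fetch(url, { method: "POST", body: payload });
  if (!res.ok) throw new Error(`scan service unavailable: ${res.status}`);
  return (await res.json()) as ScanVerdict;
}

async function allowThrough(payload: Uint8Array, regulated: boolean): Promise<boolean> {
  // Regulated/IP-sensitive content stays on the dedicated instance; the rest
  // is offloaded to the shared cloud service.
  const verdict = await scan(regulated ? DEDICATED_SCAN_URL : SHARED_SCAN_URL, payload);
  return verdict.clean;
}
```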

Service providers see a large opportunity to finally deliver value-added, scalable and revenue-generating security services atop what they offer today.  This is the realized vision of the long-awaited “clean pipes” and “secure hosting” capabilities.  See this post from 2007: “Clean Pipes – Less Sewerage or More Potable Water?”

If you haven’t noticed your service providers dipping their toes here, you certainly have seen startups (and larger security players) do so.  Here are just a few examples:

  • Qualys
  • Trend Micro
  • Symantec
  • Cisco (Ironport/ScanSafe)
  • Juniper
  • CloudFlare
  • ZScaler
  • Incapsula
  • Dome9
  • CloudPassage
  • Porticor
  • …and many more

As many vendors “virtualize” their offerings and start to realize that through basic networking, APIs, service chaining, traffic steering and security intelligence/analytics these solutions become more scalable, leverageable and interoperable, the services you’ll be able to consume will also increase…and they will become more application- and information-centric in nature.

Again, this doesn’t mean the disappearance of on-premises or host-based security capabilities, but you should expect the cloud (and its derivative offshoots like Big Data) to deliver some really awesome hybrid security capabilities that make your life easier.  Rich Mogull (@rmogull) and I gave about 20 examples of this in our “Grilling Cloudicorns: Mythical CloudSec Tools You Can Use Today” session at RSA last month.

Get ready, because while security folks often eye “The Cloud” suspiciously, it also offers up a set of emerging solutions that will undoubtedly allow for more efficient, effective and affordable security capabilities, letting us focus more on the things that matter.

/Hoff


SEO Twitter: The Emotion of Self-Promotion…

March 19th, 2012

My buddy Bill Brenner (@billbrenner70) blogged a question that stemmed from a “discussion” I seem to have initiated yesterday: “Do People In Security Blog Too Much?”

He was kind enough to accommodate a clarification from me in which I reiterated that my chief complaint regarding excessive self-promotion by individuals  was “not about volume, but variety.”

To be clear, RT’ing a link (however modified) that is clearly designed to self-promote oneself is, in my opinion, bordering on SPAM-like behavior when one does it 10+ times in a 24-hour period.

I don’t mind a lot of tweets.  I mind a lot of the same tweets.

…The same way people get annoyed with folks who live tweet conferences, I suppose.

Now, people have the right to tweet whatever they like, as often as they like, but the reason I brought this up was because I was truly interested in whether or not the individual in question understood the impact/annoyance it caused.

Based on his reply, the “data” he had to suggest “increased engagement,” and what was clearly a strategy behind this activity, it became apparent he didn’t.

So I did what anyone in my position has the option to do: I unfollowed.  This was followed by an additional comment from the author that only “…~0.1% of followers had a negative response” to his RT’ing [approximately 5 of 4,200 people].

I found that odd, since I had at least 10 DM’s in my mailbox from followers who reacted to my tweets surrounding this issue.

5 or so others then piped up suggesting they were also annoyed but, like me, had not said anything.

As I mentioned, I wasn’t looking for anything like an apology — it’s not my place to ask for one, nor am I arrogant enough to suggest I’m owed one — but I did want him to understand that there were ramifications that either he was unaware of or simply ignoring.  Again, his choice.

I probably *do* tweet too much for many people’s liking — and they unfollow accordingly.  However, I operate under the “code” that I try very hard not to RT anything self-promotional more than TWICE in a 24-hour period.  I figure that with timezone deltas, plus RSS feeds and other RTs from interested parties, that’s sufficient.

Am I potentially missing people?  Sure.  But the way I look at it is that if it’s interesting enough, people will find it.

I’m not in the “business” of “SEO for Twitter” (h/t to @SecureTom for the phrase), but that’s a personal choice.

I will suggest, however, that people are smarter than many give them credit for — you can get cute and change the preamble, but if you deluge their timeline with self-promotion, expect them to one day get grumpy enough to find the unfollow button…and use it.

/Hoff
