Archive

Posts Tagged ‘Twitter’

The Soylent Green of “Epic Hacks” – It’s Made of PEOPLE!

August 7th, 2012 3 comments

Allow me to immediately state that I am, in no way, attempting to blame or shame the victim in my editorial below.

However, the recent rash of commentary from security wonks on Twitter and blogs regarding who is to “blame” in Mat Honan’s unfortunate experience leaves me confused and misses an important point.

Firstly, the title of the oft-referenced article documenting the series of events is at the root of my discontent:

How Apple and Amazon Security Flaws Led to My Epic Hacking

As I tweeted, my assessment and suggestion for a title would be:

How my poor behavior led to my epic hacking & flawed trust models & bad luck w/Apple and Amazon assisted

…especially when coupled with what is clearly an admission by Mr. Honan, that he is, fundamentally, responsible for enabling the chained series of events that took place:

In the space of one hour, my entire digital life was destroyed. First my Google account was taken over, then deleted. Next my Twitter account was compromised, and used as a platform to broadcast racist and homophobic messages. And worst of all, my AppleID account was broken into, and my hackers used it to remotely erase all of the data on my iPhone, iPad, and MacBook.

In many ways, this was all my fault. My accounts were daisy-chained together. Getting into Amazon let my hackers get into my Apple ID account, which helped them get into Gmail, which gave them access to Twitter. Had I used two-factor authentication for my Google account, it’s possible that none of this would have happened, because their ultimate goal was always to take over my Twitter account and wreak havoc. Lulz.

Had I been regularly backing up the data on my MacBook, I wouldn’t have had to worry about losing more than a year’s worth of photos, covering the entire lifespan of my daughter, or documents and e-mails that I had stored in no other location.

Those security lapses are my fault, and I deeply, deeply regret them.

The important snippets highlighted above are obscured by the salacious title and by the bulk of the article, which suggests that the services — which he enabled and relied upon, however flawed certain components of that trust and process may have been — are *really* at the center of the debate here.  Or ought to be.

There’s clearly a bit of emotional transference occurring.  It’s easier to associate causality with a faceless big corporate machine than to swing the light toward the victim, even when he himself identifies as the root cause.

Before you think I’m madly defending and/or suggesting that there weren’t breakdowns with any of the vendors — especially Apple — let me assure you I am not.  There are many things that can and should be addressed here, but leaving out the human element, the root of it all here, is dangerous.

I am concerned that as a community there is often an air of suggestion that consumers are incapable and inculpable with respect to understanding the risks associated with the clicky-clicky-connect syndrome that all of these interconnected services bring.

People give third party applications and services unfettered access to services like Twitter and Facebook every day — even when messages surrounding the potential incursion of privacy and security are clearly stated.
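To make the mechanics concrete, here’s a minimal OAuth 2.0 sketch (assuming the Python requests-oauthlib library; the client ID, provider URLs and scopes are hypothetical placeholders, and Twitter’s actual API of that era used OAuth 1.0a). The consent page generated from this request is the one place those “clearly stated” messages appear — and it simply lists whatever scopes the application asked for:

    # Minimal OAuth 2.0 authorization sketch -- illustrative only.
    # Client ID, URLs and scopes below are hypothetical placeholders.
    from requests_oauthlib import OAuth2Session

    CLIENT_ID = "example-client-id"
    AUTH_URL = "https://provider.example/oauth/authorize"

    # The application requests every permission it might ever want...
    oauth = OAuth2Session(
        CLIENT_ID,
        redirect_uri="https://app.example/callback",
        scope=["read", "write", "direct_messages"],
    )

    authorization_url, state = oauth.authorization_url(AUTH_URL)

    # ...and the user sees that list exactly once, on the consent page,
    # before clicking "Allow". The resulting token keeps working until
    # it is explicitly revoked.
    print("Send the user to:", authorization_url)

Once granted, that access is rarely revisited; revocation is a manual chore buried somewhere in the service’s settings.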

When something does fail — and it does and always will — we vilify the suppliers (sometimes rightfully so for poor practices) but we never really look at what we need to do to prevent having to see this again: “Those security lapses are my fault, and I deeply, deeply regret them.”
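As one concrete example of the user-side control Honan wishes he’d had enabled on his Google account, here’s a minimal TOTP two-factor sketch (assuming the Python pyotp library; the account and service names are illustrative):

    # Minimal TOTP (time-based one-time password) sketch -- illustrative only.
    import pyotp

    # Provisioned once by the service and stored against the account.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # URI the user loads into an authenticator app (e.g. via a QR code).
    print(totp.provisioning_uri(name="user@example.com",
                                issuer_name="ExampleService"))

    # At login, a stolen or socially-engineered password alone no longer
    # gets an attacker in -- they also need the current 6-digit code.
    candidate = input("6-digit code: ")
    print("accepted" if totp.verify(candidate) else "rejected")

It’s a small amount of friction for the user, which is exactly why it so often goes unused.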

The more interconnected things become, the more dependent we will be on flawed trust models and on the expectation that users bear no responsibility.

This is the point I made in my presentations: Cloudifornication and Cloudinomicon.

There’s a lot of interesting discussion regarding the effectiveness of security awareness training.  Dave Aitel started a lively one here: “Why you shouldn’t train employees for security awareness.”

It’s unfortunate that the only real way people learn is through misfortune; any way you look at it, that’s the thing that drives awareness.

There are many lessons we can learn from Mr. Honan’s unfortunate experience…I urge you to focus less on blaming one link in the chain and instead guide the people you can influence to reconsider decisions of convenience against the potential tradeoffs they incur.

/Hoff

P.S. For you youngsters who don’t get the Soylent Green reference, see here.  Better yet, watch it. It’s awesome. Charlton Heston, FTW.

P.P.S. (Check out the sentiment of all the articles below)


SEO Twitter: The Emotion of Self-Promotion…

March 19th, 2012 5 comments

My buddy Bill Brenner (@billbrenner70) blogged a question that stemmed from a “discussion” I seem to have initiated yesterday: “Do People In Security Blog Too Much?”

He was kind enough to accommodate a clarification from me in which I reiterated that my chief complaint regarding excessive self-promotion by individuals  was “not about volume, but variety.”

To be clear, RT’ing a link (however modified) that is clearly designed to self-promote oneself is, in my opinion, bordering on SPAM-like behavior when one does it 10+ times in a 24-hour period.

I don’t mind a lot of tweets.  I mind a lot of the same tweets.

…The same way people get annoyed with folks who live tweet conferences, I suppose.

Now, people have the right to tweet whatever they like, as often as they like, but the reason I brought this up was because I was truly interested in whether or not the individual in question understood the impact/annoyance it caused.

Based on his reply, the “data” he had to suggest “increased engagement,” and what was clearly a strategy behind this activity, it became apparent he didn’t.

So I did what anyone in my position has the option to do: I unfollowed.  This was followed by an additional comment from the author that only “…~0.1% of followers had a negative response” to his RT’ing [approximately 5/4200 people.]

I found that odd, since I had at least 10 DM’s in my mailbox from followers who reacted to my tweets surrounding this issue.

5 or so others then piped up suggesting they were also annoyed but, like me, had not said anything.

As I mentioned, I wasn’t looking for anything like an apology — it’s not my place to, nor am I arrogant enough to suggest I’m owed one — but I did want him to understand that there were ramifications that either he was unaware of or simply ignoring.  Again, his choice.

I probably *do* tweet too much for many people’s liking — and they unfollow accordingly.  However, I operate under the “code” that I try very hard not to RT anything self-promotional more than TWICE in a 24-hour period.  I figure that with timezone deltas, RSS feeds and other RTs from interested parties, that’s sufficient.

Am I potentially missing people?  Sure.  But the way I look at it is that if it’s interesting enough, people will find it.

I’m not in the “business” of “SEO for Twitter” (h/t to @SecureTom for the phrase,) but that’s a personal choice.

I will suggest, however, that people are smarter than many give them credit for — you can get cute and change the preamble, but if you deluge their timeline with self-promotion, expect them to one day get grumpy enough to find the unfollow button…and use it.

/Hoff

 


Flying Cars & Why The Hypervisor Is A Ride-On Lawnmower In Comparison

September 23rd, 2011 18 comments

I wrote a piece a while ago (in 2009) titled “Virtual Machines Are Part Of the Problem, Not the Solution…” in which I described the fact that hypervisors, virtualization and the packaging that supports them — Virtual Machines (VMs) — were actually kludges.

Specifically, VMs still contain the bloat (nee: cancer) that are operating systems and carry forward all of the issues and complexity (albeit now with more abstraction cowbell) that we already suffer.  Yes, it brings a lot of GOOD stuff, too, but tolerate the analog for a minute, m’kay.

Moreover, the move in operational models such as Cloud Computing (leveraging the virtualization theme) and the up-stack crawl from IaaS to PaaS (covered also in a blog I wrote titled: Silent Lucidity: IaaS – Already A Dinosaur?) seems to indicate a general trending toward a reduction in the number of layers in the overall compute stack.

Something I saw this morning reminded me of this and its relation to how the evolution and integration of various functions — such as virtualization and security — directly into CPUs themselves are going to dramatically disrupt how we perceive and value “virtualization” and “cloud” in the long run.

I’m not going to go into much detail because there’s a metric crapload of NDA type stuff associated with the details, but I offer you this as something you may have already thought about and the industry is gingerly crawling toward across multiple platforms.  You’ll have to divine and associate the rest:

Think “Microkernels”

…and in the immortal words of Forrest Gump “That’s all I’m gonna say ’bout that.”

/Hoff

* Ray DePena humorously quipped on Twitter that “…the flying car never materialized,” to which I retorted “Incorrect. It has just not been mass produced…” I believe this progression will — and must — materialize.


Cloud Computing, Open* and the Integrator’s Dilemma

April 11th, 2011 4 comments

My esteemed co-tormentor of Twitter, Christian Reilly (@reillyusa,) did a fantastic job of describing the impact — or more specifically the potential lack thereof — of Facebook’s OpenCompute initiative on the typical enterprise as compared to the real target audience, the service provider and manufacturers of equipment for service providers:

…I genuinely believe that for traditional service providers who are making investments in new areas and offerings, XaaS providers, OEM hardware vendors and those with plans to become giants in the next generation(s) of Systems Integrators that the OpenCompute project is a huge step forward and will be a fantastic success story over the next few years as the community and its innovations grow and tangible benefits emerge.

I think Christian has it dead on; the trickle-down effect — large service providers leveraging innovation in facilities and compute construction to squeeze maximum cost efficiencies (based on power, density, cooling, and space) from their services — will be good for everyone, but it’s quite important to recognize why and how:

…consider that today’s public cloud services and co-location providers are today’s equivalent of commercial airlines, providing their own multi-tenant services, price structures and user experiences on top of just a handful of airframe and engine manufacturers. OpenCompute has the potential to influence the efficiency and effectiveness of those manufacturers by helping to contribute towards ideas and potentially standards that can be adopted across the industry.

Specific to the adoption of OpenCompute as an enterprise blueprint, he widened the bifurcation between “private clouds operated by service providers as public clouds” (my words) and “private clouds operated by enterprises for their own use” with a telling analog:

Bottom line ? To today’s large corporate IT shops; those who either have, or will continue to operate on-premise or co-located “private cloud” environments, the excitement levels around the OpenCompute project (if anyone actually hears of it at all) will be all-to-familiarly low as sadly, to wake some of these sleeping giants, it will take more than a poke from the very same company who’s website their IT teams are trying to prevent employees from accessing.

This is the point of departure for OpenCompute — it’s not framed for or designed for enterprise consumption.  In an altogether fascinating description of why Facebook open-sourced its data center design, the Huffington Post summarized it thus:

“[The Open Compute Project] really is a big deal because it constitutes a general shift in terms of what how we look at technology as a competitive advantage,” O’Grady said. “For Facebook, the evidence is piling up that they don’t consider technology to be a competitive advantage. They view their competitive advantage in the marketplace to be their users.”

Here we see the general abstraction of technology in line with Nick Carr’s premise that “IT Doesn’t Matter:”

“Sharing its blueprints may gain Facebook not only free manpower, but cheaper equipment. The company’s bet, analysts say, is that giving away intellectual property will help it foster an ecosystem of competing vendors that will drive down the cost of parts.”

With that in mind, I am just as worried about the fate of OpenStack, its enterprise-versus-service-provider audience, and how it’s being perceived as those audiences watch the mad scramble by tech companies to add value and get a seat at the table.

Each of these well-intentioned projects is curated by public cloud operators and technology vendors and is indirectly positioned for the benefit of enterprises, but not really meant for their consumption — at least not if they don’t end up putting enterprises right back where they were trying to escape from in the first place with cloud computing: the integrator’s dilemma.

If you look at the underlying premise of OpenStack — its modularity, flexibility and open design — what you get is the ability to craft a solution finely tuned to an operating environment of your design. Integrate solutions into the stack as you see fit.  Contribute code.  Develop an ecosystem. Integrate, manage, maintain…
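As a concrete (if anachronistic) illustration of what that integration surface looks like, here’s a minimal sketch of booting a single instance with the modern openstacksdk — an assumption on my part, since the 2011-era tooling used per-project clients, and every name below (cloud, image, flavor, network) is a placeholder you would have to stand up and wire together yourself:

    # Minimal "boot one server" sketch with openstacksdk -- illustrative only.
    # Even the simplest case forces you to integrate identity (clouds.yaml),
    # images, flavors and networking before you get a single VM.
    import openstack

    conn = openstack.connect(cloud="my-private-cloud")  # credentials from clouds.yaml

    image = conn.compute.find_image("ubuntu-image")      # placeholder names
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("tenant-net")

    server = conn.compute.create_server(
        name="demo-instance",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.name, server.status)

Multiply that by storage, identity, monitoring, patching and upgrades, and you have the integrator’s dilemma in miniature.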

This is as much a problem as it is a solution for an enterprise.  This is why, in many cases, enterprises choose a single vendor with a single neck to choke in order to avoid having to act as an integrator at all, or simply look to outsource to one or more public cloud providers and sidestep the problem entirely.

Chances are, most are realistically caught up somewhere in the nether-regions in between the two.

I wish to make it clear that I am very much a proponent of Open*, but I realize that the lack of direct enterprise involvement in standards bodies and “open” initiatives — and a reluctance to share information and experience for fear of losing competitive advantage — is what drives enterprises to Closed* in the first place; they want to lessen their development and integration burdens, and the Lego erector-set approach in many ways scares conservative, risk-averse CxOs away from projects like this.

I think this is where we’ll see more of these “clouds in a box” being paired with managed services to keep it all humming, regardless of where it lives. [See infrastructure solutions from: Dell, VCE, HP, Oracle, etc. paired with “Cloud” distributions layered atop]

Let’s hope we see enterprise success stories built on leveraging OpenCompute and OpenStack…it will be good for all of us.

/Hoff

Update: I just saw that my colleague, James Urquhart, wrote a blog titled “Cloud disrupts, creates channel opportunities” in which he details the channel’s role in this integration challenge. Spot on.


Incomplete Thought: Cloudbursting Your Bubble – I call Bullshit…

April 5th, 2011 6 comments

My wife is in the midst of an extended multi-phasic, multi-day delivery process of our fourth child.  In between bouts of her moaning, breathing and ultimately sleeping, I’m left to taunt people on Twitter and think about Cloud.

Reviewing my hot-button list of terms that are annoying me presently, I hit upon a favorite: Cloudbursting.

It occurred to me that this term brings up a visceral experience that makes me want to punch kittens.  It’s used by people to describe a use case in which workloads that run first and foremost within the walled gardens of an enterprise, magically burst forth into public cloud based upon a lack of capacity internally and a plethora of available capacity externally.

I call bullshit.

Now, allow me to qualify that statement.

Ben Kepes suggests that cloud bursting makes sense to an enterprise “Because you’ve spent a gazillion dollars on on-prem h/w that you want to continue using. BUT your workloads are spiky…” such that an enterprise would be focused on “…maximizing returns from on-prem. But sending excess capacity to the clouds.”  This implies the problem you’re trying to solve is one of scale.

I just don’t buy this.

Either you build a private cloud that gives you the scale you need in the first place — patterning your operational models after public cloud — and/or you design a solid plan to migrate, interconnect or extend platforms to the public [commodity] cloud, therefore not bursting but completely migrating capacity. What you don’t do is stop somewhere in the middle, with the same old crap internally and a bright, shiny public cloud you “burst things to” when you get your capacity knickers in a twist:

The investment and skillsets needed to reconcile two often diametrically-opposed operational models don’t maximize returns; they bifurcate and diminish efficiencies and blur cost allocation models, making both internal IT and public cloud look grotesquely inaccurate.
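To illustrate why, here’s a toy sketch (not anyone’s real scheduler — every number and name is invented) of the naive “burst on overflow” policy: identical workloads land in whichever operational model happens to have headroom at that moment, so the cost attributed to the *same* workload changes run to run, which is precisely how chargeback and capacity planning get blurred:

    # Toy "burst on overflow" placement sketch -- all figures are invented.
    from dataclasses import dataclass

    INTERNAL_HOURLY = 0.08   # hypothetical amortized on-prem cost per core-hour
    PUBLIC_HOURLY = 0.12     # hypothetical public-cloud cost per core-hour

    @dataclass
    class Workload:
        name: str
        cores: int
        hours: int

    def place(w: Workload, internal_free_cores: int) -> str:
        """Naive cloudbursting: stay internal until capacity runs out, then overflow."""
        return "internal" if w.cores <= internal_free_cores else "public"

    def run(workloads, internal_capacity=64):
        free = internal_capacity
        costs = {"internal": 0.0, "public": 0.0}
        for w in workloads:
            target = place(w, free)
            if target == "internal":
                free -= w.cores
            rate = INTERNAL_HOURLY if target == "internal" else PUBLIC_HOURLY
            costs[target] += w.cores * w.hours * rate
            print(f"{w.name} -> {target}")
        return costs

    demand = [Workload("batch-report", 32, 4),
              Workload("web-tier", 24, 24),
              Workload("analytics", 32, 8)]   # this one overflows
    print(run(demand))

Reorder the arrivals and the “analytics” job runs internally while “web-tier” bursts instead — same demand, different books.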

Christian Reilly suggested I had no legs to stand on making these arguments:

Fair enough, but…

Short of workloads such as HPC, in which scale really is a major concern, if a large enterprise has gone through all of the issues relevant to running tier-1 applications in a public cloud, why on earth would you “burst” to the public cloud versus execute on a strategy that has those workloads run there in the first place?

Christian came up with another ringer during this exchange, one that I wholeheartedly agree with:

Ultimately, the reason I agree so strongly with this is because of the architectural, operational and compliance complexity associated with all the mechanics one needs to allow for interoperable, scalable, secure and manageable workloads between an internal enterprise’s operational domain (cloud or otherwise) and the public cloud.

The (in)ability to replicate capabilities exactly across these two models means that gaps arise — gaps that unfairly amplify the immaturity of cloud for certain things and its stellar capabilities in others.  It’s no wonder people get confused.  Things like security, networking, application intelligence…

NOTE: I make a wholesale differentiation between a strategy that includes a structured hybrid cloud approach of controlled workload placement/execution versus a purely overflow/capacity movement of workloads.*

There are many workloads that simply won’t or can’t *natively* “cloudburst” to public cloud due to a lack of supporting packaging and infrastructure.**  Some of them are pretty important.  Some of them are operationally mission critical. What then?  Without an appropriate way of understanding the implications and complexity associated with this issue and getting there from here, we’re left with a strategy of “…leave those tier-1 apps to die on the vine while we greenfield migrate new apps to public cloud.”  That doesn’t sound particularly sexy, useful, efficient or cost-effective.

There are overlay solutions that can allow an enterprise to leverage utility computing services as an abstracted delivery platform and fluidly interconnect an enterprise with a public cloud, but one must understand what’s involved architecturally as part of that hybrid model, what the benefits are and where the penalties lie.  Public cloud needs the same rigor in its due diligence.

[update] My colleague James Urquhart summarized well what I meant by describing the difference in DC-DC (cloud or otherwise) workload execution as what I see as either end of a spectrum: VM-centric package mobility or adopting a truly distributed application architecture.  If you’re somewhere in the middle, things like cloudbursting get really hairy.  As we move from IaaS -> PaaS, some of these issues may evaporate as the former (VM’s) becomes less relevant and the latter (Applications deployed directly to platforms) more prevalent.

Check out this zinger from JP Morgenthal which much better conveys what I meant:

If your Tier-1 workloads can run in a public cloud and satisfy all your requirements, THAT’S where they should run in the first place!  You maximize your investment internally by scaling down and ruthlessly squeezing efficiency out of what you have as quickly as possible — writing those investments off the books.

That’s the point, innit?

Cloud bursting — today — is simply a marketing term.

Thoughts?

/Hoff

* This may be the point that requires more clarity, especially in the wake of examples that were raised on Twitter after I posted this, such as using eBay and Netflix as examples of successful “cloudbursting” applications.  My response is that these fine companies hardly resemble a typical enterprise, and that they’re also investing in a model that fundamentally changes the way they operate.

** I should point out that I am referring to the use case of heterogeneous cloud platforms such as VMware to AWS (either using an import/conversion function and/or via VPC) versus a more homogeneous platform interlock such as when the enterprise runs vSphere internally and looks to migrate VMs over to a VMware vCloud-powered cloud provider using something like vCloud Director Connector, for example.  Either way, the point still stands, if you can run a workload and satisfy your requirements outright on someone else’s stack, why do it on yours?
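For a sense of what that “import/conversion function” looks like in practice today, here is a hedged sketch using boto3’s EC2 image import — a modern equivalent rather than the 2011-era ec2-import-instance tooling, with the bucket, key and required service-role setup all placeholders you must provision:

    # Sketch of importing a VMDK exported from vSphere into EC2 -- illustrative
    # only; bucket, key and the vmimport role setup are placeholders.
    import boto3

    ec2 = boto3.client("ec2")

    response = ec2.import_image(
        Description="vSphere VM converted for EC2",
        DiskContainers=[{
            "Format": "vmdk",
            "UserBucket": {
                "S3Bucket": "example-export-bucket",
                "S3Key": "exports/legacy-app.vmdk",
            },
        }],
    )
    print(response["ImportTaskId"])  # poll with describe_import_image_tasks()

Either way, the conversion step itself is a reminder that the two stacks are not the same platform.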


On Security Conference Themes: Offense *Versus* Defense – Or, Can You Code?

November 22nd, 2010 7 comments

This morning’s dialog on Twitter from @wmremes and @singe reminded me of something that’s been bouncing around in my head for some time.

Wim blogged about a tweet Jeff Moss made regarding Black Hat DC in which he suggested CFP submissions should focus on offense (versus defense.)

Black Hat (and Defcon) have long focused on presentations which highlight novel emerging attacks.  There are generally not a lot of high-profile “defensive” presentations/talks because for the most part, they’re just not sexy, generally they involve hard work/cultural realignment and the reality that as hard as we try, attackers will always out-innovate and out-pace defenders.

More realistically, offense is sexy and offense sells — and it often sells defense.  That’s why vendors sponsor those shows in the first place.

Along these lines, one will notice that within our industry the defining criterion for the attack-versus-defend talks — and for those who give them — is one’s ability to write code and produce tools that demonstrate the vulnerability via exploit.  Conceptual vulnerabilities paired with non-existent exploits are generally thought of as fodder for academia.  Only when a tool that weaponizes an attack shows up do people pay attention.

Zero days rule by definition. There’s no analog on the defensive side unless you buy into marketing like “…ahead of the threat.” *cough* Defense for offense that doesn’t exist generally doesn’t get the majority of the funding 😉

So it’s no wonder that security “rockstars” in our industry are generally those who produce attack/offensive code which illustrate how a vector can be exploited.  It’s tangible.  It’s demonstrable.  It’s sexy.

On the other hand, most defenders are reconciled to using tools that others wrote — or become specialists in the integration of them — in order to parlay some advantage over the ever-increasing wares of the former.

Think of those folks who represent the security industry in terms of mindshare and get the most amount of press.  Overwhelmingly it’s those “hax0rs” who write cool tools — tools that are more offensive in nature, even if they produce results oriented toward allowing practitioners to defend better (or at least that’s how they’re sold.)  That said, there are also some folks who *do* code and *do* create things that are defensive in nature.

I believe the answer lies in balance; we need flashy exploits (no matter how impractical/irrelevant they may be to a large amount of the population) to drive awareness.  We also need  more practitioner/governance talks to give people platforms upon which they can start to architect solutions.  We need more defenders to be able to write code.

Perhaps that’s what Richard Bejtlich meant when he tweeted: “Real security is built, not bought.”  That’s an interesting statement on lots of fronts. I’m selfishly taking Richard’s statement out of context to support my point, so hopefully he’ll forgive me.

That said, I don’t write code.  More specifically, I don’t write code well.  I have hundreds of ideas of things I’d like to do but can’t bridge the gap between ideation and proof-of-concept because I can’t write code.

This is why I often “invent” scenarios I find plausible, talk about them, and then get people thinking about how we would defend against them — usually in the vacuum of either offensive or defensive tools being available, or at least realized.

Sometimes there aren’t good answers.

I hope we focus on this balance more at shows like Black Hat — I’m lucky enough to get to present my “research” there despite it being defensive in nature but we need more defensive tools and talks to make this a reality.

/Hoff


Reflections on SANS ’99 New Orleans: Where It All Started

July 25th, 2010 1 comment

A few weeks ago I saw some RT’s/@’s on Twitter referencing John Flowers and that name brought back some memories.

Today I sent a tweet to John asking him if I remembered correctly that he was at SANS in New Orleans in 1999 when he was still at Hiverworld.

He responded back confirming he was, indeed, at SANS ’99.  I remarked that this was where I first met many of today’s big names in security: Ed Skoudis, Ron Gula, Marty Roesch, Stephen Northcutt, Chris Klaus, JD Glaser, Greg Hoglund, and Bruce Schneier.

John responded back:

I couldn’t agree more.  That was an absolutely amazing time. I was on my second security startup (NodeWarrior Networks,) times were booming and this generation of the security industry as we know it was being born.

I remember many awesome things from that week:

  • Sitting in “Intrusion Detection Shadow Style” with Stephen Northcutt and Judy Novak for something like 8 hours, going cross-eyed reading tcpdump packet traces and getting every question Stephen asked wrong. Well, some of them, anyway 😉
  • Asking Ron Gula’s wife something about Dragon and her looking back at me like I was a total n00b
  • Asking Ron Gula the same question and having him confirm that I was, in fact, a complete tool
  • Staying up all night drinking, writing code in Perl and doing dangerous things on other people’s networks
  • Participating in my first CTF
  • Almost getting arrested for B&E as I tried to rig the CTF contest by attempting to steal/clone/pwn/replace the HDD in the target machine. The funniest part of that was almost pulling it off (stealing the removable drive) but electrocuting myself in the process — which is what alerted the security guard to my presence.
  • Interrupting Lance Spitzner’s talk by stringing a poster behind him that said “www.lancespitznerismyhero.com” (a domain I registered during the event.)
  • Watching Bruce Schneier scream at the book store guy because they, incredibly, did not stock “Practical Cryptography”
  • Sitting down with Ed Skoudis (who was with SAIC at the time, I believe,) looking at one another and wondering just what the hell we were going to do with our careers in security
  • Spending $14,000 (I shit you not, it was the Internet BOOM time, remember) by hitting 6 of the best restaurants in New Orleans with a party of hax0rs and working the charge department at American Express into a frenzy (not to mention actually using the line from Pretty Woman: “we’re going to spend obscene amounts of money here” in order to get in…)
  • Burning the roof of my mouth by not heeding the warnings of the waitress at Café du Monde, biting into a beignet which cauterized my mouth as I simultaneously tried to extinguish the pain with scalding hot chicory coffee.

I came back from that week knowing with every molecule in my body that even though I’d been “doing” security for 5 years already, it was exactly what I wanted to do for the rest of my life.

I have Stephen Northcutt to thank for that.  I haven’t been to a SANS since 1999 (don’t ask me why) but I am so excited about going back in August in DC (SANS What Works In Virtualization and Cloud Computing Summit) and giving a keynote at the event.

It’s been a long time.  Too long.

/Hoff
