J-Law Nudie Pics, Jeremiah, Privacy and Dropbox – An Epic FAIL of Mutual Distraction

September 2nd, 2014 No comments

From the “It can happen to anyone” department…

A couple of days ago, prior to the announcement that hundreds of celebrities’ nudie shots were liberated from their owners and posted to the Web, I customized some Growl notifications on my Mac to provide additional realtime auditing of apps I was interested in.  One of the applications I enabled was Dropbox synch messaging so I could monitor some sharing activity.

Ordinarily, these two events would not be related except I was also tracking down a local disk utilization issue that was vexing me on a day-to-day basis: my local SSD storage would ephemerally increase/decrease by GIGABYTES and I couldn’t figure out why.

So this evening, quite literally as I was reading RSnake’s interesting blog post titled “So your nude selfies were just hacked,” a Growl notification popped up informing me that several new Dropbox files were completing synchronization.

Puzzled because I wasn’t aware of any public shares and/or remote folders I was synching, I checked the Dropbox synch status and saw a number of files that were unfamiliar — and yet the names of the files certainly piqued my interest…they appeared to belong to a very good friend of mine given their titles. o_O

I checked the folder these files were resting in — gigabytes of them — and realized it was a shared folder that I had set up 3 years ago to allow a friend of mine to share a video from one of our infamous Jiu Jitsu smackdown sessions at the RSA Security Conference.  I hadn’t bothered to unshare said folder for years, especially since my cloud storage quota kept increasing while my local storage didn’t.

As I put 1 and 1 together, I realized that for at least a couple of years, Jeremiah (Grossman) had been using this shared Dropbox folder titled “Dropit” as a repository for file storage, thinking it was HIS!

This is why gigs of storage were appearing/disappearing from my local storage when he added/removed files; I hadn’t seen the synch messages before and thus never saw the filenames.

I jumped on Twitter and engaged Jer in a DM session (see below) where I was laughing so hard I was crying…he eventually called me and I walked him through what happened.

Once we came to terms with what had happened (and how much fun I could have with it), Jer ultimately copied the files off the share and I unshared the Dropbox folder.

We agreed it was important to share this event because, like previous issues each of us has had, we’re all about honest disclosure so we (and others) can learn from our mistakes.

The lessons learned?

  1. Dropbox doesn’t make it clear whether a folder that’s shared and mounted is yours or someone else’s — they look the same.
  2. Ensure you know where your data is synching to!  Services like Dropbox, iCloud, Google Drive, SkyDrive, etc. make it VERY easy to forget where things are actually stored!
  3. Check your logs and/or enable things like Growl notifications (on the Mac) to ensure you can see when things are happening (see the sketch after this list).
  4. Unshare things when you’re done.  Audit these services regularly.
  5. Even seasoned security pros can make basic security/privacy mistakes; I shared a folder and didn’t audit it and Jer put stuff in a folder he thought was his.  It wasn’t.
  6. Never store nudie pics in a folder you don’t encrypt — and as far as I can tell, Jer didn’t…but I DIDN’T CLICK…HONEST!
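
Incidentally, you don’t need Growl specifically to get this kind of visibility.  Below is a minimal DIY sketch (not what I actually ran; it assumes the third-party Python watchdog package and a hypothetical sync-folder path) that logs every file appearing, changing, or disappearing under a synced folder:

```python
# watch_sync.py -- minimal DIY sync auditing: log every file that
# appears, changes, or disappears under a cloud-synced folder.
# Assumes: pip install watchdog; the path below is hypothetical.
import time
from pathlib import Path

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

WATCHED = Path.home() / "Dropbox"  # adjust to your sync root

class SyncAuditor(FileSystemEventHandler):
    def on_created(self, event):
        print(f"[sync] appeared:    {event.src_path}")

    def on_deleted(self, event):
        print(f"[sync] disappeared: {event.src_path}")

    def on_modified(self, event):
        if not event.is_directory:
            print(f"[sync] changed:     {event.src_path}")

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(SyncAuditor(), str(WATCHED), recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()
```

Run it in a terminal and you get exactly the kind of “gigabytes appearing/disappearing” churn described above, with the filenames attached.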

Jer and I laughed our asses off, but imagine if this had been confidential information or embarrassing pictures and I wasn’t his friend.

If you use Dropbox or similar services, please pay attention.

I don’t want to see your junk.

/Hoff

P.S. Thanks for being a good sport, Jer.

P.P.S. I about died laughing sending these DMs:

[Screenshot: the Twitter DM exchange with Jer]

 

How To Be a Cloud Mogul(l) – Our 2014 RSA “Dueling Banjos/Cloud/DevOps” Talk

March 27th, 2014 No comments

Rich Mogull (Securosis) and I have given a standing set of talks over the last 5-6 years at the RSA Security Conference that focus on innovation, disruption and ultimately making security practitioners more relevant in the face of all this churn.

We’ve always offered practical peeks at what’s coming and what folks can do to prepare.

This year, we (I should say mostly Rich) built a bunch of Ruby code that leveraged stuff running in Amazon Web Services (and other Cloud services) to show how security folks with little “coding” capability could build and deploy this themselves.

Specifically, this talk was about SecDevOps — using principles that allow for automated and elastic cloud services to do interesting security things that can be leveraged in public and private clouds using Chef and other assorted mechanisms.
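
To give a flavor of what that looks like, here’s a minimal sketch in Python (the talk’s actual demos were Ruby; this stand-in assumes the boto3 library and configured AWS credentials) that audits AWS security groups for SSH left open to the world:

```python
# sg_audit.py -- a SecDevOps-flavored audit: flag AWS security groups
# that leave 22/tcp (SSH) open to the entire Internet.
# Assumes: pip install boto3, plus configured AWS credentials.
import boto3

def world_open_ssh(region: str = "us-east-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    offenders = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            # "-1" means all protocols/ports; otherwise check the range.
            all_ports = perm.get("IpProtocol") == "-1"
            lo, hi = perm.get("FromPort"), perm.get("ToPort")
            covers_ssh = all_ports or (
                lo is not None and lo <= 22 <= (hi if hi is not None else lo)
            )
            if covers_ssh and any(
                r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
            ):
                offenders.append(sg["GroupId"])
    return offenders

if __name__ == "__main__":
    for group_id in world_open_ssh():
        print(f"0.0.0.0/0 -> 22/tcp allowed in {group_id}")
```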

I also built a bunch of stuff using the RackSpace Private Cloud stack and Chef, but didn’t have the wherewithal or time to demonstrate it — and doing live demos over a tethered iPad connection to AWS meant that if it sucked, it was Rich’s fault.

You can find the presentation here (it clearly doesn’t include the live demos):

Dueling Banjos – Cloud vs. Enterprise Security: Using Automation and (Sec)DevOps NOW

/Hoff

 

On the Topic Of ‘Stopping’ DDoS.

March 10th, 2014 11 comments

The insufferable fatigue of imprecise language with respect to “stopping” DDoS attacks caused me to tweet something that my pal @CSOAndy suggested was just as pedantic and wrong as that against which I railed:

[embedded tweet]

The long and short of Andy’s displeasure with my comment was:

[embedded tweet]

to which I responded:

[embedded tweet]

…and then…

[embedded tweet]

My point, ultimately, is that in the context of DDoS mitigation such as offload scrubbing services, unless one prevents the attacker(s) from generating traffic, the attack is not “stopped.”  If a scrubbing service redirects traffic and absorbs it, and the attacker continues to send packets, the “attack” continues because the attacker has not been stopped — he/she/they have been redirected.

Now, has the OUTCOME changed?  Absolutely.  Has the intended victim possibly been spared the resultant denial of service?  Quite possibly.  Could there even now possibly be extra “space in the pipe?” Uh huh.

Has the attack “stopped” or ceased?  Nope.  Not until the spice stops flowing.

Nuance?  Pedantry?  Sure.

Wrong?  I don’t think so.

/Hoff


The Easiest $20 I ever saved…

March 2nd, 2014 5 comments

During the 2014 RSA Conference, I participated on a repeating panel with Bret Hartman, CTO of Cisco’s Security Business Unit, and Martin Brown from BT.  The first day was moderated by Jon Oltsik while on the second day, the three of us were left to, um, self-moderate.

It occurred to me during our very lively (and packed) second day, wherein the audience was extremely interactive, that I should boost the challenge I had made to the audience on day one by offering a little monetary encouragement for answering a question.

Since the panel was titled “Network Security Smackdown: Which Technologies Will Survive?,” I offered a $20 kicker to anyone who could come up with a legitimate counter example — give me one “network security” technology that has actually gone away in the last 20 years.

<chirp chirp>

Despite Bret trying to pocket the money and many folks trying valiantly to answer, I still have my twenty bucks.

I’ll leave the conclusion as an exercise for the reader.

/Hoff


NGFW = No Good For Workloads…

February 13th, 2014 3 comments

So-called Next Generation Firewalls (NGFW) are those that extend “traditional port firewalls” with application visibility and control and the added context of user identity, enforcing security, compliance and productivity decisions on flows from internal users to the Internet.

NGFW, as defined, is a campus and branch solution. Campus and Branch NGFW solves the “inside-out” problem — applying policy from a number of known/identified users on the “inside” to a potentially infinite number of applications and services “outside” the firewall, generally connected to the Internet. They function generally as forward proxies with various network insertion strategies.

Campus and Branch NGFW is NOT a Data Center NGFW solution.

Data Center NGFW is the inverse of the “inside-out” problem.  They solve the “outside-in” problem; applying policy from a potentially infinite number of unknown (or potentially unknown) users/clients on the “outside” to a nominally diminutive number of well-known applications and services “inside” the firewall that are exposed generally to the Internet.  They function generally as reverse proxies with various network insertion strategies.

Campus and Branch NGFWs need to provide application visibility and control across potentially tens of thousands of applications, many of which are evasive.

Data Center NGFWs need to provide application visibility and control across significantly fewer well-known, managed applications, many of which are bespoke.

There are wholesale differences in performance, scale and complexity between “inside-out” and “outside-in” firewalls.  They solve different problems.
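
If the distinction seems abstract, here’s a toy sketch of the “outside-in” insertion model: a bare-bones reverse proxy (Python standard library only; the port, backend and policy are hypothetical) that enforces policy for an unbounded set of unknown clients in front of a single well-known internal app:

```python
# dc_reverse_proxy.py -- toy "outside-in" enforcement: unknown clients
# hit the proxy, which applies policy before relaying to the one
# well-known internal app. Ports, backend, and policy are hypothetical.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKEND = "http://127.0.0.1:8080"    # the well-known "inside" app
BLOCKED_PREFIXES = ("/admin",)       # toy outside-in policy

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Policy first: a DC firewall protects a small, known
        # application surface from an unbounded set of clients.
        if self.path.startswith(BLOCKED_PREFIXES):
            self.send_error(403, "Blocked by policy")
            return
        with urllib.request.urlopen(BACKEND + self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8443), ReverseProxy).serve_forever()
```

A C&B NGFW points the same machinery the other way: known users on the inside, an unbounded Internet on the outside.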

The things that make a NGFW supposedly “special” and different from a “traditional port firewall” in a Campus & Branch environment are largely irrelevant in the Data Center.  Speaking of which, you’d find it difficult to find solutions today that are simply “traditional port firewalls”; the notion that firewalls integrated with IPS, UTM, ALGs, proxies, integrated user authentication, application identification/granular control (AVC), etc., are somehow incapable of providing the same outcome is now largely a marketing distinction.

While both sets of NGFW solutions share a valid deployment scenario at the “edge” or perimeter of a network (C&B or DC,) a further differentiation in DC NGFW is the notion of deployment in the so-called “core” of a network.  The requirements in this scenario mean comparing the deployment scenarios is comparing apples and oranges.

Firstly, the notion of a “core” is quickly becoming an anachronism from the perspective of architectural references, especially given the advent of collapsed network tiers and fabrics as well as the impact of virtualization, cloud and network virtualization (nee SDN) models.  Shunting a firewall into these models is often difficult, no matter how many interfaces it has.  Flows are also asynchronous and oftentimes stateless.

Traditional Data Center segmentation strategies are becoming a blended mix of physical isolation (usually for compliance and/or peace of mind o_O) and a virtualized overlay provided in the hypervisor and/or virtual appliances.  Traffic patterns have shifted such that machine-to-machine, east-west flows via intra-enclave “pods” are far more common.  Dumping all flows through one firewall (or a cluster of them) at the “core” does what, exactly — besides adding latency and, oftentimes, obscured or unnecessary inspection?

Add to this the complexity of certain verticals in the DC where extreme low-latency “firewalls” are needed with requirements at 5 microseconds or less.  The sorts of things people care about enforcing from a policy perspective aren’t exactly “next generation.”  Or, then again, how about DC firewalls that work at the mobile service provider eNodeB, mobile packet core or Gi with specific protocol requirements not generally found in the “Enterprise?”

In these scenarios, claims that a Campus & Branch NGFW is tuned to defend against “outside-in” application level attacks against workloads hosted in a Data Center is specious at best.  Slapping a bunch of those Campus & Branch firewalls together in a chassis and calling it a Data Center NGFW invokes ROFLcoptr.

Show me how a forward-proxy optimized C&B NGFW deals with a DDoS attack (assuming the pipe isn’t flooded in the first place.)  Show me how a forward-proxy optimized C&B NGFW deals with application level attacks manipulating business logic and webapp attack vectors across known-good or unknown inputs.

They don’t.  So don’t believe the marketing.

I haven’t even mentioned the operational model and expertise deltas needed to manage the two.  Or integration between physical and virtual zoning, or on/off-box automation and visibility to orchestration systems such that policies are more dynamic and “virtualization aware” in nature…

In my opinion, NGFW is being redefined by the addition of functionality that again differentiates C&B from DC based on use case.  Here are JUST two of them:

  • C&B NGFW is becoming what I call C&B NGFW+, specifically the addition of advanced anti-malware (AAMW) capabilities at the edge to detect and prevent infection as part of the “inside-out” use case.  This includes adjacent solutions that include other components and delivery models.
  • DC NGFW is becoming DC NGFW+, specifically the addition of (web) application security capabilities and DoS/DDoS capabilities to prevent (generally) externally-originated attacks against internally-hosted (web) applications.  This, too, requires the collaboration of other solutions specifically designed to enable security in this use case.

There are hybrid models that often require BOTH solutions: adequately protecting against client infection, distribution and exploitation in the C&B in order to prevent attacks against DC assets connected over the WAN or a VPN.

Pretending both use cases are the same is farcical.

It’s unlikely you’ll see a shift in analyst “Enchanted Dodecahedrons” relative to the functionality/definition of NGFW because…strangely…people aren’t generally buying Campus and Branch NGFWs for their datacenters; they’re trying to solve different problems.  At different levels of scale and performance.

A Campus and Branch NGFW is “No Good For Workloads” in the Data Center.  

/Hoff

Maslow’s Hierarchy Of Security Product Needs & Vendor Selection…

November 21st, 2013 1 comment

Interpretation is left as an exercise for the reader ;)  This went a tad bacterial (viral is too strong a description) on Twitter:

[Image: Maslow’s Hierarchy of Security Product Needs]

 


My Information Security Magazine Cover Story: “Virtualization security dynamics get old, changes ahead”

November 4th, 2013 2 comments

This month’s Search Security (nee Information Security Magazine) cover story was penned by none other than yours truly and is titled “Virtualization security dynamics get old, changes ahead.”

I hope you enjoy the story; it’s a retrospective on the beginnings of security in the virtual space, where we are now, and where we’re headed.

I tried very hard to make this a vendor-neutral look at the state of the union of virtual security.

I hope that it’s useful.

You can find the story here.

/Hoff


The Curious Case Of Continuous and Consistently Contiguous Crypto…

August 8th, 2013 9 comments

Here’s an interesting resurgence of a security architecture and an operational deployment model that is making a comeback:

Requiring VPN tunneled and MITM’d access to any resource, internal or external, from any source internal or external.

While mobile devices (laptops, phones and tablets) are often deployed with client or client-less VPN endpoint solutions that enable them to move outside the corporate boundary and access internal resources, there’s a marked uptake in the requirement that all traffic from all sources utilize VPNs (SSL/TLS, IPsec or both) and that ALL sessions terminate on managed gateways, regardless of ownership or location of either the endpoint or the resource being accessed.

Put more simply: require VPN for (id)entity authentication, access control, and confidentiality and then MITM all the things to transparently or forcibly fork to security infrastructure.

Why?

The reasons are pretty easy to understand.  Here are just a few of them:

  1. The user experience shouldn’t change regardless of the access modality or location of the endpoint consumer; who, what, where, when, how, and why all matter, but the user shouldn’t have to care.
  2. Whether inside or outside, the notion of split tunneling on a per-service/per-application basis means that we need visibility to understand and correlate traffic patterns and usage.
  3. Because the majority of traffic is encrypted (usually via SSL,) security infrastructure needs the capability to inspect traffic (selectively) using a coverage model that is practical and can give a first-step view of activity.
  4. Information exfiltration (legitimate and otherwise) is a problem.

…so how are folks approaching this?

Easy.  They simply require that all sessions terminate on a set of [read: clustered & scalable] VPN gateways, selectively decrypt based on policy, forward (in serial or parallel) to any number of security apparatus, and in some/many cases, re-encrypt sessions and send them on their way.
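
For illustration only, here’s a minimal sketch of that selective-decrypt pattern written as a mitmproxy addon (mitmproxy stands in for the clustered VPN/TLS gateway; the hostnames and policy are hypothetical, and this assumes mitmproxy’s documented addon hooks):

```python
# selective_mitm.py -- policy-based selective TLS interception as a
# mitmproxy addon (run: mitmproxy -s selective_mitm.py). mitmproxy
# stands in for the VPN/TLS gateway; hostnames/policy are hypothetical.
import logging

from mitmproxy import http, tls

# Toy policy: categories we must NOT decrypt (banking, health, etc.)
PASSTHROUGH_SUFFIXES = (".bank.example.com", ".health.example.com")

def tls_clienthello(data: tls.ClientHelloData) -> None:
    sni = data.client_hello.sni or ""
    if sni.endswith(PASSTHROUGH_SUFFIXES):
        # Relay the encrypted bytes untouched: no MITM for this session.
        data.ignore_connection = True
        logging.info("passthrough (policy): %s", sni)

def request(flow: http.HTTPFlow) -> None:
    # Anything reaching this hook was decrypted per policy; this is
    # where a copy would be forked to IDS/DLP before re-encryption.
    logging.info("inspecting: %s", flow.request.pretty_url)
```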

We’ve been doing this “forever” with the “outside-in” model (remote access to internal resources,) but the notion that folks are starting to do this ubiquitously on internal networks is the nuance.  AVC (application visibility and control) is the inside-out component (usually using transparent forward proxies with trusted PAC files on endpoints) with remote access and/or reverse proxies like WAFs and/or ADCs as the outside-in use case.

These two ops models were generally viewed and managed as separate problems.  Now thanks to Cloud, Mobility, virtualization and BYOE (bring your own everything) as well as the more skilled and determined set of adversaries, we’re seeing a convergence of the two.  To make the “inside-out” and “outside-in” more interesting, what we’re really talking about here is extending the use case to include “inside-inside” if you catch my drift.

Merging the use case approach at a fundamental architecture level can be useful; this methodology works regardless of source or destination.  It does require all sorts of incidental changes to things like IdM, AAA, certificate management, etc. but it’s one way that folks are trying to centralize the distributed — if you get what I mean.

I may draw a picture to illustrate what I mean, but do let me know if you’re doing this (many of the largest customers I know are) and whether it makes sense.

/Hoff

P.S. Remember back in the 80’s/90’s when 3Com bundled NIC cards with integrated IPSec VPN capability?  Yeah, that.


Incomplete Thought: The Psychology Of Red Teaming Failure – Do Not Pass Go…

August 6th, 2013 14 comments
[Image: Team Fortress red team (Photo credit: gtrwndr87)]

I could probably just ask this of some of my friends — many of whom are the best in the business when it comes to Red Teaming/Pen Testing, but I thought it would be an interesting little dialog here, in the open:

When a Red Team is engaged by an entity to perform a legally-authorized pentest (physical or electronic) with an explicit “get out of jail free card,” does that change the tactics, strategy and risk appetite of the team compared to how they would operate were they not to have that parachute?

Specifically, does the team dial-up or dial-down the aggressiveness of the approach and execution KNOWING that they won’t be prosecuted, go to jail, etc.?

Blackhats and criminals operating outside this envelope don’t have the luxury of counting on a gilded escape should failure occur and thus the risk/reward mapping *might* be quite different.

To that point, I wonder what the gap is between an authorized Red Team’s actions and those of adversaries who have everything to lose.  What say ye?

/Hoff


Incomplete Thought: In-Line Security Devices & the Fallacies Of Block Mode

June 28th, 2013 16 comments

The results of a long-running series of extremely scientific studies have produced a Metric Crapload™ of anecdata.

Namely, hundreds of detailed discussions (read: lots of booze and whining) over the last 5 years have resulted in the following:

Most in-line security appliances (excluding firewalls) with the ability to actively dispose of traffic — services such as IPS, WAF and anti-malware — are deployed in “monitor” or “learning” mode and are rarely, if ever, switched to automated blocking.  In essence, they are deployed as detective rather than preventative security services.
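
To make the detective-versus-preventative distinction concrete, here’s a toy sketch (illustrative names only, not any vendor’s actual API) showing the same match logic with the two dispositions:

```python
# inline_mode.py -- the same detection logic, two dispositions:
# "monitor" (detective) logs and forwards; "block" (preventative)
# logs and drops. Names here are illustrative only.
from enum import Enum

class Mode(Enum):
    MONITOR = "monitor"   # detective: alert, forward anyway
    BLOCK = "block"       # preventative: alert and drop

SIGNATURES = (b"' OR 1=1--", b"../../etc/passwd")  # toy rules

def disposition(payload: bytes, mode: Mode) -> bool:
    """Return True if the traffic should be forwarded."""
    if not any(sig in payload for sig in SIGNATURES):
        return True
    if mode is Mode.MONITOR:
        print("ALERT (forwarded anyway):", payload[:40])
        return True   # the attack still reaches the target
    print("DROPPED:", payload[:40])
    return False

# Same traffic, two very different outcomes:
disposition(b"GET /?q=' OR 1=1-- HTTP/1.1", Mode.MONITOR)  # forwarded
disposition(b"GET /?q=' OR 1=1-- HTTP/1.1", Mode.BLOCK)    # dropped
```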

I have compiled many reasons for this.

I am interested in hearing whether you agree/disagree and your reasons for such.

/Hoff
