Archive

Archive for the ‘Encryption’ Category

The Curious Case Of Continuous and Consistently Contiguous Crypto…

August 8th, 2013 9 comments

Here’s an interesting resurgence of a security architecture and operational deployment model:

Requiring VPN-tunneled and MITM’d access to any resource, internal or external, from any source, internal or external.

While mobile devices (laptops, phones and tablets) are often deployed with client or clientless VPN endpoint solutions that enable them to move outside the corporate boundary and access internal resources, there’s a marked uptick in requirements that all traffic from all sources utilize VPNs (SSL/TLS, IPsec or both) and that ALL sessions terminate on security infrastructure, regardless of the ownership or location of either the endpoint or the resource being accessed.

Put more simply: require VPN for (id)entity authentication, access control, and confidentiality and then MITM all the things to transparently or forcibly fork to security infrastructure.

Why?

The reasons are pretty easy to understand.  Here are just a few of them:

  1. The user experience shouldn’t change regardless of the access modality or location of the endpoint consumer; the who, what, where, when, how, and why all matter, but the user shouldn’t have to care.
  2. Whether inside or outside, the notion of split tunneling on a per-service/per-application basis means that we need visibility to understand and correlate traffic patterns and usage.
  3. Because the majority of traffic is encrypted (usually via SSL), security infrastructure needs the capability to inspect traffic (selectively) using a coverage model that is practical and can give a first-step view of activity.
  4. Information exfiltration (legitimate and otherwise) is a problem.

…so how are folks approaching this?

Easy.  They simply require that all sessions terminate on a set of [read: clustered & scalable] VPN gateways, selectively decrypt based on policy, forward (in serial or parallel) to any number of security apparatus, and in some/many cases, re-encrypt sessions and send them on their way.
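To make the policy step concrete, here’s a minimal sketch of the selective-decrypt decision and the serial hand-off to inspection tooling. All rule names, categories and inspectors are hypothetical placeholders I made up for illustration, not any particular product’s API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Session:
    user: str
    dst_host: str
    dst_port: int

# First-match policy deciding which terminated sessions get decrypted.
# The carve-outs are illustrative assumptions (e.g. privacy-sensitive
# destinations are re-encrypted and forwarded without inspection).
POLICY = [
    (lambda s: s.dst_host.endswith(".bank.example"), "bypass"),
    (lambda s: s.dst_port not in (443, 8443),        "inspect"),
    (lambda s: True,                                 "inspect"),  # default
]

def disposition(s: Session) -> str:
    return next(action for match, action in POLICY if match(s))

# Stand-ins for the "security apparatus" the decrypted stream is forked to,
# applied in serial; any inspector can veto the session.
def dlp(plaintext: bytes) -> bool:
    return b"CONFIDENTIAL" not in plaintext

def ips(plaintext: bytes) -> bool:
    return True  # placeholder for real signature/anomaly inspection

INSPECTORS: List[Callable[[bytes], bool]] = [dlp, ips]

def handle(s: Session, plaintext: bytes) -> bool:
    """True if the gateway should re-encrypt and forward the session."""
    if disposition(s) == "bypass":
        return True
    return all(inspect(plaintext) for inspect in INSPECTORS)
```

A real gateway obviously does this per-flow at line rate, but the shape of the decision (terminate, classify, selectively decrypt, fork, re-encrypt) is the same.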

We’ve been doing this “forever” with the “outside-in” model (remote access to internal resources), but the notion that folks are starting to do this ubiquitously on internal networks is the nuance.  AVC (application visibility and control) is the inside-out component (usually using transparent forward proxies with trusted PAC files on endpoints), with remote access and/or reverse proxies like WAFs and/or ADCs as the outside-in use case.

These two ops models were generally viewed and managed as separate problems.  Now thanks to Cloud, Mobility, virtualization and BYOE (bring your own everything) as well as the more skilled and determined set of adversaries, we’re seeing a convergence of the two.  To make the “inside-out” and “outside-in” more interesting, what we’re really talking about here is extending the use case to include “inside-inside” if you catch my drift.

Merging the use case approach at a fundamental architecture level can be useful; this methodology works regardless of source or destination.  It does require all sorts of incidental changes to things like IdM, AAA, certificate management, etc. but it’s one way that folks are trying to centralize the distributed — if you get what I mean.

I may draw a picture to illustrate what I mean, but do let me know if you’re doing this (many of the largest customers I know are) or whether it makes sense.

/Hoff

P.S. Remember back in the 80’s/90’s when 3Com bundled NIC cards with integrated IPSec VPN capability?  Yeah, that.


MashSSL – An Excellent Idea You’ve Probably Never Heard Of…

January 30th, 2010 No comments

I’ve been meaning to write about MashSSL for a while, as it occurs to me that it’s a particularly elegant solution to some very real challenges we have today.  Trusting the browser, the operator of said browser, or a web service when using multi-party web applications is a fatal flaw.

We’re struggling with how to deal with authentication in distributed web and cloud applications. MashSSL seems as though it’s a candidate for the toolbox of solutions:

MashSSL allows web applications to mutually authenticate and establish a secure channel without having to trust the user or the browser. MashSSL is a Layer 7 security protocol running within HTTP in a RESTful fashion. It uses an innovation called “friend in the middle” to turn the proven SSL protocol into a multi-party protocol that inherits SSL’s security, efficiency and mature trust infrastructure
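To make the “friend in the middle” notion concrete, here’s a minimal sketch of the underlying trick: an SSL handshake whose bytes are ferried by an untrusted relay standing in for the browser, done here with Python’s in-memory TLS objects. The certificate paths and hostname are assumptions for illustration, and this is my reading of the concept, not MashSSL’s actual wire protocol:

```python
import ssl

# Hypothetical PKI material -- assumptions for illustration only.
CA_CERT = "ca.pem"
SERVER_CERT, SERVER_KEY = "server.pem", "server.key"
CLIENT_CERT, CLIENT_KEY = "client.pem", "client.key"

def make_endpoint(server_side: bool):
    purpose = ssl.Purpose.CLIENT_AUTH if server_side else ssl.Purpose.SERVER_AUTH
    ctx = ssl.create_default_context(purpose, cafile=CA_CERT)
    if server_side:
        ctx.load_cert_chain(SERVER_CERT, SERVER_KEY)
        ctx.verify_mode = ssl.CERT_REQUIRED      # demand the peer's cert too
    else:
        ctx.load_cert_chain(CLIENT_CERT, CLIENT_KEY)
    inbound, outbound = ssl.MemoryBIO(), ssl.MemoryBIO()
    tls = ctx.wrap_bio(inbound, outbound, server_side=server_side,
                       server_hostname=None if server_side else "server")
    return tls, inbound, outbound

def relay(src_out: ssl.MemoryBIO, dst_in: ssl.MemoryBIO) -> None:
    data = src_out.read()
    if data:
        dst_in.write(data)   # the untrusted middleman only ferries opaque bytes

client, c_in, c_out = make_endpoint(server_side=False)
server, s_in, s_out = make_endpoint(server_side=True)

for _ in range(10):          # drive both handshakes through the relay
    for end in (client, server):
        try:
            end.do_handshake()
        except (ssl.SSLWantReadError, ssl.SSLWantWriteError):
            pass
    relay(c_out, s_in)
    relay(s_out, c_in)

print(server.getpeercert())  # both ends are now mutually authenticated
```

The point is that the relay never sees keys or plaintext and cannot tamper without breaking the handshake, which is what lets the two web applications trust each other without trusting the user or the browser.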

Make sure you check out the sections on “Why and How,” especially the “MashSSL Overview” section which explains how it works.

I should mention the code is also open source.

/Hoff

Endpoint Security vs. DLP? That’s Part Of the Problem…

March 31st, 2008 6 comments

Larry Walsh wrote something (Defining the Difference Between Endpoint Security and Data Loss Prevention) that sparked an interesting debate based upon a vendor presentation given to him on "endpoint security" by SanDisk.

SanDisk is bringing to market a set of high-capacity USB flash drives that feature built-in filesystem encryption as well as strong authentication and access control.  If the device gets lost with the data on it, it’s "safe and secure" because it’s encrypted.  They are positioning this as an "endpoint security" solution.
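By way of illustration, the core mechanism (derive a key from a user credential, then authenticate-and-encrypt the data at rest) looks something like the following generic sketch using the third-party Python cryptography package. This is illustrative only, not SanDisk’s design:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def _derive_key(passphrase: bytes, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return kdf.derive(passphrase)

def encrypt_blob(passphrase: bytes, plaintext: bytes) -> bytes:
    salt, nonce = os.urandom(16), os.urandom(12)
    key = _derive_key(passphrase, salt)
    ct = AESGCM(key).encrypt(nonce, plaintext, None)
    return salt + nonce + ct            # store KDF/cipher parameters alongside

def decrypt_blob(passphrase: bytes, blob: bytes) -> bytes:
    salt, nonce, ct = blob[:16], blob[16:28], blob[28:]
    # A wrong passphrase yields a wrong key, and AES-GCM's tag check makes
    # decryption fail loudly (InvalidTag) instead of returning garbage.
    return AESGCM(_derive_key(passphrase, salt)).decrypt(nonce, ct, None)
```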

I’m not going to debate the merits/downsides of that approach because I haven’t seen their pitch, but suffice it to say, I think it’s missing a "couple" of pieces to solve anything other than a very specific set of business problems.

Larry’s dilemma stems from the fact that he maintains that this capability and functionality is really about data loss protection and doesn’t have much to do with "endpoint security" at all:

We debated that in my office for a few minutes. From my perspective, this solution seems more like a data loss prevention solution than endpoint security. Admittedly, there are many flavors of endpoint security. When I think of endpoint security, I think of network access control (NAC), configuration management, vulnerability management and security policy enforcement. While this solution is designed for the endpoint client, it doesn’t do any of the above tasks. Rather, it forces users to use one type of portable media and transparently applies security protection to the data. To me, that’s DLP.

In today’s market taxonomy, I would agree with Larry.  However, what Larry is struggling with is not really the current state of DLP versus "endpoint security," but rather the future state of converged information-centric governance.  He’s describing the problem that will drive the solution as well as the inevitable market consolidation to follow.

This is actually the whole reason Mogull and I are talking about the evolution of DLP as it exists today to a converged solution we call CMMP — Content Management, Monitoring and Protection. {Yes, I just added another M for Management in there…}

What CMMP represents is the evolved and converged end-state technology integration of solutions that today provide a point solution but "tomorrow" will be combined/converged into a larger suite of services.

Off the cuff, I’d expect that we will see at a minimum the following technologies being integrated to deliver CMMP as a pervasive function across the information lifecycle and across platforms in flight/motion and at rest:

  • Data leakage/loss protection (DLP)
  • Identity and access management (IAM)
  • Network Admission/Access Control (NAC)
  • Digital rights/Enterprise rights management (DRM/ERM)
  • Seamless encryption based upon "communities of interest"
  • Information classification and profiling
  • Metadata
  • Deep Packet Inspection (DPI)
  • Vulnerability Management
  • Configuration Management
  • Database Activity Monitoring (DAM)
  • Application and Database Monitoring and Protection (ADMP)
  • etc…

That’s not to say they’ll all end up as a single software install or network appliance; rather, we’ll see a consolidated family of solutions from a few top-tier vendors who have coverage across the application, host and network space.

If you were to look at any enterprise today struggling with this problem, they likely have or are planning to have most of the point solutions above anyway.  The difficulty is that they’re all from different vendors.  In the future, we’ll see larger suites from fewer vendors providing a more cohesive solution.

This really gives us the "cross domain information protection" that Rich talks about.

We may never achieve the end-state described above in its entirety, but it’s safe to say that the more we focus on the "endpoint" rather than the "information on the endpoint," the bigger the problem we will have.

/Hoff

Pondering Implications On Standards & Products Due To Cold Boot Attacks On Encryption Keys

February 22nd, 2008 4 comments

You’ve no doubt seen the latest handywork of Ed Felten and his team from the Princeton Center for Information Technology Policy regarding cold boot attacks on encryption keys:

Abstract: Contrary to popular assumption, DRAMs used in most modern computers retain their contents for seconds to minutes after power is lost, even at operating temperatures and even if removed from a motherboard. Although DRAMs become less reliable when they are not refreshed, they are not immediately erased, and their contents persist sufficiently for malicious (or forensic) acquisition of usable full-system memory images. We show that this phenomenon limits the ability of an operating system to protect cryptographic key material from an attacker with physical access. We use cold reboots to mount attacks on popular disk encryption systems — BitLocker, FileVault, dm-crypt, and TrueCrypt — using no special devices or materials. We experimentally characterize the extent and predictability of memory remanence and report that remanence times can be increased dramatically with simple techniques. We offer new algorithms for finding cryptographic keys in memory images and for correcting errors caused by bit decay. Though we discuss several strategies for partially mitigating these risks, we know of no simple remedy that would eliminate them.

Check out the video below (if you have scripting disabled, here’s the link).  Fascinating and scary stuff.
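The abstract’s mention of “algorithms for finding cryptographic keys in memory images” is worth dwelling on. Here’s a minimal sketch of the idea for AES-128, assuming the expanded key schedule sits contiguously in the dump: slide a window across the image and test whether the 160 bytes following each 16-byte candidate are exactly its AES key expansion. (The authors’ real tool also tolerates bit-decay errors; this exact-match version just shows the principle, and the names are mine, not theirs.)

```python
def _xtime(a: int) -> int:                 # multiply by x in GF(2^8)
    return ((a << 1) ^ 0x1B) & 0xFF if a & 0x80 else a << 1

def _make_sbox() -> list:                  # build the AES S-box from scratch
    exp, log = [0] * 255, [0] * 256
    x = 1
    for i in range(255):                   # exp/log tables over generator 3
        exp[i], log[x] = x, i
        x ^= _xtime(x)                     # x *= 3 in GF(2^8)
    sbox = [0x63]                          # affine transform of inverse(0)
    for a in range(1, 256):
        inv, b = exp[(255 - log[a]) % 255], 0
        for r in range(5):                 # inv ^ rotl1 ^ rotl2 ^ rotl3 ^ rotl4
            b ^= ((inv << r) | (inv >> (8 - r))) & 0xFF
        sbox.append(b ^ 0x63)
    return sbox

_SBOX = _make_sbox()

def expand_key(key: bytes) -> bytes:
    """AES-128 key expansion: 16-byte key -> 176-byte round-key schedule."""
    w, rcon = bytearray(key), 1
    while len(w) < 176:
        t = w[-4:]
        if len(w) % 16 == 0:               # RotWord + SubWord + Rcon
            t = bytes([_SBOX[t[1]] ^ rcon, _SBOX[t[2]], _SBOX[t[3]], _SBOX[t[0]]])
            rcon = _xtime(rcon)
        w += bytes(a ^ b for a, b in zip(w[-16:-12], t))
    return bytes(w)

def find_keys(image: bytes):
    """Yield (offset, key) wherever a full AES-128 key schedule appears."""
    for off in range(len(image) - 175):
        cand = image[off:off + 16]
        if expand_key(cand) == image[off:off + 176]:
            yield off, cand

# image = open("memdump.bin", "rb").read()   # hypothetical memory dump
```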

Would a TPM implementation mitigate this if the keys weren’t stored (even temporarily) in RAM?

Given the surge lately toward full disk encryption products, I wonder how the market will react to this.  I am interested in both the broad industry impact and response from vendors.  I won’t be surprised if we see new products crop up in a matter of days advertising magical defenses against such attacks as well as vendors scrambling to do damage control.

This might be a bit of a reach, but equally as interesting to me are the potential implications for DoD/Military crypto standards such as FIPS 140-2 (I believe the draft of 140-3 is circulating…).  In the case of certain products at specific security levels, it’s obvious based on the video that one wouldn’t necessarily need physical access to a crypto module (or RAM) in order to potentially attack it.

It’s always amazing to me when really smart people think of really creative, innovative and (in some cases) obvious ways of examining what we all take for granted.

Thin Clients: Does This Laptop Make My Ass(ets) Look Fat?

January 10th, 2008 11 comments

Juicy Fat Assets, Ripe For the Picking…

So here’s an interesting spin on de/re-perimeterization…if people think we cannot achieve, and cannot afford to wait for, secure operating systems, secure protocols and self-defending information-centric environments, but need to “secure” their environments today, I have a simple question supported by a simple equation for illustration:

For the majority of mobile and internal users in a typical corporation who use the basic set of applications:

  1. Assume a company that:
    …fits within the 90% of those who still have data centers, isn’t completely outsourced/off-shored for IT and supports a remote workforce that uses Microsoft OS and the usual suspect applications and doesn’t plan on utilizing distributed grid computing and widespread third-party SaaS
  2. Take the following:
    Data Breaches.  Lost Laptops.  Non-sanitized corporate hard drives on eBay.  Malware.  Non-compliant asset configurations.  Patching woes.  Hardware failures.  Device Failure.  Remote Backup issues.  Endpoint Security Software Sprawl.  Skyrocketing security/compliance costs.  Lost Customer Confidence.  Fines.  Lost Revenue.  Reduced budget.
  3. Combine With:
    Cheap Bandwidth.  Lots of types of bandwidth/access modalities.  Centralized Applications and Data. Any Web-enabled Computing Platform.  SSL VPN.  Virtualization.  Centralized Encryption at Rest.  IAM.  DLP/CMP.  Lots of choices to provide thin-client/streaming desktop capability.  Offline-capable Web Apps.
  4. Shake Well, Re-allocate Funding, Streamline Operations and "Security"…
  5. You Get:
    Less Risk.  Less Cost.  Better Control Over Data.  More "Secure" Operations.  Better Resilience.  Assurance of Information.  Simplified Operations. Easier Backup.  One Version of the Truth (data.)

I really just don’t get why we continue to deploy and are forced to support remote platforms we can’t protect, allow our data to inhabit islands we can’t control and at the same time admit the inevitability of disaster while continuing to spend our money on solutions that can’t possibly solve the problems.

If we’re going to be information centric, we should take the first rational and reasonable steps toward doing so. Until the operating systems are more secure and the data can self-describe and cause the compute and network stacks to “self-defend,” why do we continue to focus on the endpoint?  It’s a waste of time.

If we can isolate and reduce the number of avenues of access to data and leverage dumb presentation platforms to do it, why aren’t we?

…I mean besides the fact that an entire industry has been leeching off this mess for decades…


I’ll Gladly Pay You Tuesday For A Secure Solution Today…

The technology exists TODAY to centralize the bulk of our most important assets and allow our workforce to accomplish their goals and the business to function just as well (perhaps better) without the need for data to actually "leave" the data centers in whose security we have already invested so much money.

Many people are already doing that with their servers through the adoption of virtualization.  Now they need to do it with their clients.

The only reason we’re now going absolutely stupid and spending money on securing endpoints in their current state is because we’re CAUSING (not just allowing) data to leave our enclaves.  In fact with all this blabla2.0 hype, we’ve convinced ourselves we must.

Hogwash.  I’ve posted on the consumerization of IT where companies are allowing their employees to use their own compute platforms.  How do you think many of them do this?

Relax, Dude…Keep Your Firewalls…

In the case of centralized computing and streamed desktops to dumb/thin clients, the “perimeter” still includes our data centers and security castles/moats, but it also encapsulates a streamed, virtualized, encrypted, and authenticated thin-client session bubble.  Instead of worrying about the endpoint, we can treat it as nothing more than a flickering display with a keyboard/mouse.

Let your kid use Limewire.  Let Uncle Bob surf pr0n.  Let wifey download spyware.  If my data and applications don’t live on the machine and all the clicks/mouseys are just screen updates, what do I care?

Yup, you can still use a screen scraper or a camera phone to use data inappropriately, but this is where balancing risk comes into play.  Let’s keep the discussion within the 80% of reasonable factored arguments.  We’ll never eliminate 100% and we don’t have to in order to be successful.

Sure, there are exceptions and corner cases where data *does* need to leave our embrace, but we can eliminate an entire class of problem if we take advantage of what we have today and stop this endpoint madness.

This goes for internal corporate users who are chained to their desks and not just mobile users.

What’s preventing you from doing this today?

/Hoff

Network Security is Dead…It’s All About the Host.

May 28th, 2007 6 comments

No, not entirely, as it’s really about the data, but I had an epiphany last week.

I didn’t get any on me, but I was really excited about the — brace yourself — future of security in a meeting I had with Microsoft.  It reaffirmed my belief that while some low-hanging security fruit will be picked off by the network, the majority of the security value won’t be delivered by it.

I didn’t think I’d recognize just how much of it — in such a short time — will ultimately make its way back into the host (OS), and perhaps you didn’t either.

We started with centralized host-based computing, moved to client-server.  We’ve had Web1.0, are in the beginnings of WebX.0 and I ultimately believe that we’re headed back to a centralized host-based paradigm now that the network transport is fast, reliable and cheap.

That means that a bunch of the stuff we use today to secure the “network” will gravitate back towards the host. I’ve used Scott McNealy’s mantra as he intended it in order to provide some color to conversations before, but I’m going to butcher it here.

While I agree that in the abstract the “Network is the Computer,” in order to secure it you’re going to have to treat the “network” like an OS…hard to do.  That’s why I think more and more security will make its way back to the actual “computer” instead.

So much of the strategy of the large security vendors sees an increased footprint back on the host.  It’s showing up there today in the guise of AV, HIPS, configuration management, NAC and Extrusion Prevention, but it’s going to play a much, much loftier role as time goes on, as the level of interaction and interoperability must increase.  Rather than put 10+ agents on a box, imagine if that stuff were already built in?

Heresy, I suppose.

I wager that the "you can’t trust the endpoint" and "all security will make its way into the switch" crowds will start yapping on this point, but before that happens, let me explain…

The Microsoft Factor

Vista_box_2
I was fortunate enough to sit down with some of the key players in Microsoft’s security team last week and engage in a lively bit of banter regarding both practical and esoteric elements of where security has been, is now and will be in the near future.

On the tail of Mr. Chambers’ Interop keynote, the discussion was all abuzz regarding collaboration and WebX.0 and the wonders that will come of the technology levers in the near future as well as the, ahem, security challenges that this new world order will bring.  I’ll cover that little gem in another blog entry.

Some of us wanted to curl up into a fetal position.  Others saw a chance to correct material defects in the way in which the intersection of networking and security has been approached.  I think the combination of the two is natural and healthy and ultimately quite predictable in these conversations.

I did a bit of both, honestly.

As you can guess, given who I was talking to, much of what was discussed found its way back to a host-centric view of security with a heavy anchoring in the continued evolution of producing more secure operating systems, more secure code, more secure protocols and strong authentication paired with encryption.

I expected to roll my eyes a lot and figured that our conversation would gravitate towards UAC and that a bulk-helping of vapor functionality would be dispensed with the normal disclaimers urging "…when it’s available one day" as a helping would be ladled generously into the dog-food bowls the Microsofties were eating from.

I am really glad I was wrong, and it just goes to show you that it’s important to consider a balanced scorecard in all this; listen with two holes, talk with one…preferably the correct one 😉

I may be shot for saying this in the court of popular opinion, but I think Microsoft is really doing a fantastic job in their renewed efforts toward improving security.  It’s not perfect, but the security industry is such a fickle and bipolar mistress — if you’re not 100% you’re a zero.

After spending all this time urging people that the future of security will not be delivered in the network proper, I have not focused enough attention on the advancements that are indeed creeping their way into the OSs toward a more secure future, even though this inertia orthogonally reinforces my point.

Yes, I work for a company that provides network-centric security offerings.  Does this contradict the statement I just made?  I don’t think so, and neither did the folks from Microsoft.  There will always be a need to consolidate certain security functionality that does not fit within the context of the host — at least within an acceptable timeframe as the nature of security continues to evolve.  Read on.

The network will become transparent.  Why?

In this brave new world, mutually-authenticated and encrypted network communications won’t be visible to the majority of the plumbing that’s transporting them, so short of specific shunts to the residual overlay solutions (the controls that will not make their way to the host), the network isn’t going to add much security value at all.

The Jericho Effect

What I found interesting is that I’ve enjoyed similar discussions with the distinguished fellows of the Jericho Forum wherein, after we’ve debated the merits of WHAT you might call it (“deperimeterization,” “reperimeterization,” or my favorite, “radical externalization”), the notion of HOW it happens weighs heavily on the evolution of security as we know it.

I have to admit that I’ve been a bit harsh on the Jericho boys before, but Paul Simmonds and I (or at least I did) came to the realization that my allergic reaction wasn’t to the concepts at hand, but rather the abrasive marketing of the message.  Live and learn.

Both sets of conversations basically see the pendulum effect of security in action in this oversimplification of what Jericho posits is the future of security and what Microsoft can deliver — today:

Take a host with a secured OS, connect it into any network using whatever means you find appropriate, without regard for having to think about whether you’re on the “inside” or “outside.” Communicate securely, access and exchange data in policy-defined “zones of trust” using open, secure, authenticated and encrypted protocols.

If you’re interested in the non-butchered more specific elements of the Jericho Forum’s "10 Commandments," see here.

What I wasn’t expecting in marrying these two classes of conversation is that this future of security is much closer and notably much more possible than I readily expected…with a Microsoft OS, no less.  In fact, I got a demonstration of it.  It may seem like no big deal to some of you, but the underlying architectural enhancements to Microsoft’s Vista and Longhorn OSs are a fantastic improvement on what we have had to put up with thus far.

One of the Microsoft guys fired up his laptop with a standard-issue off-the-shelf edition of Vista, authenticated with his smartcard, transparently attached to the hotel’s open wireless network and then took me on a tour of some non-privileged internal Microsoft network resources.

Then he showed me some of the ad-hoc collaborative "People Near Me" peer2peer tools built into Vista — same sorts of functionality…transparent, collaborative and apparently quite secure (gasp!) all at the same time.

It was all mutually authenticated and encrypted and done so transparently to him.

He didn’t "do" anything; no VPN clients, no split-brain tunneling, no additional Active-X agents, no SSL or IPSec shims…it’s the integrated functionality provided by both IPv6 and IPSec in the NextGen IP stack present in Vista.

And in his words "it just works."   Yes it does.

He basically established connectivity, and his machine reached out to a reachable read-only DC (after authentication and with encryption) which allowed him to transparently resolve “internal” vs. “external” resources.  Yes, the OS must still evolve to prevent exploitation of the OS itself, but this too shall improve over time.
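Here’s a rough sketch of that inside/outside determination as I understand the behavior; the probe host, port and CA file are hypothetical placeholders, not Microsoft’s implementation:

```python
import socket
import ssl

INTERNAL_PROBE = ("dc-ro.corp.example", 636)    # assumed internal-only DC

def on_internal_network(timeout: float = 2.0) -> bool:
    """Reachable *and* authenticated probe => we're inside the perimeter."""
    try:
        with socket.create_connection(INTERNAL_PROBE, timeout=timeout) as sock:
            ctx = ssl.create_default_context(cafile="corp-ca.pem")  # assumed CA
            with ctx.wrap_socket(sock, server_hostname=INTERNAL_PROBE[0]):
                return True
    except (OSError, ssl.SSLError):
        return False   # unreachable or unauthenticated: use the external path

access_path = "direct" if on_internal_network() else "tunneled"
```

Note that the probe is authenticated, not just reachable; a hostile network can answer on the right port, but it can’t present a certificate chaining to the corporate CA.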

No, it obviously doesn’t address what happens if you’re using a Mac or Linux, but the pressure will be on to provide the same sort of transparent, collaborative and secure functionality across those OS’s, too.

Allow me my generalizations — I know that security isn’t fixed and that we still have problems, but think of this as a glass half-full, willya!?

One of the other benefits I got from this conversation is the reminder that as Vista and Longhorn default to IPv6 natively (they can do both v4 & v6 dynamically), as enterprises upgrade, the network hardware and software (and hence the existing security architecture) must also be able to support IPv6 natively.  It’s not just the government pushing v6; large enterprises are now standardizing on it, too.
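For software, supporting both natively mostly comes down to writing protocol-agnostic connection code rather than hard-wiring IPv4 sockets. A minimal sketch:

```python
import socket

def connect(host: str, port: int, timeout: float = 5.0) -> socket.socket:
    """Try every address family getaddrinfo offers (AAAA and A records)."""
    last_err = None
    for family, type_, proto, _, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            s = socket.socket(family, type_, proto)
            s.settimeout(timeout)
            s.connect(addr)    # resolvers typically list v6 first when usable
            return s
        except OSError as err:
            last_err = err     # fall through to the next candidate address
    raise last_err or OSError("no usable addresses")
```

The same code then works unchanged on v6-native, dual-stack and v4-only networks, which is exactly the property the upgraded plumbing has to preserve.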

Here are some excellent links describing the Nextgen IP stack in Vista, the native support for IPSec (goodbye, VPN market) and IPv6 support.

Funny how people keep talking about Google being a threat to Microsoft.  I think that the network giants like Cisco might have their hands full with Microsoft…look at how each of them are maneuvering.

/Hoff
{ Typing this on my Mac…staring @ a Vista Box I’m waiting to open to install within Parallels 😉 }

On Flying Pigs, DNSSEC, and embedded versus overlaid security…

April 2nd, 2007 4 comments

I found Thomas Ptacek’s comments regarding DNSSEC deliciously ironic, not for anything directly related to secure DNS, but rather for a point he made while substantiating his position on DNSSEC and describing the intelligence (or lack thereof) of the network and application layers.

This may have just been oversight on his part, but it occurs to me that I’ve witnessed something on the order of a polar magnetic inversion of sorts.  Or not.  Maybe it’s the coffee.  Ethiopian Yirgacheffe does that to me.

Specifically, Thomas and I have debated previously about this topic and my contention is that the network plumbing ought to be fast, reliable, resilient and dumb whilst elements such as security and applications should make up a service layer of intelligence running atop the pipes. 

Thomas’ assertions focus on the manifest destiny that Cisco will rule the interconnected universe and that security, amongst other things, will — and more importantly should — become absorbed into and provided by the network switches and routers.

While Thomas’ arguments below are admittedly regarding the “Internet” versus the “Intranet,” I maintain that the issues are the same.  It seems that his statements below, which appear to endorse the “end-to-end argument in system design” as the “fundamental design principle of the Internet,” are at odds with his previous aspersions regarding my belief.  Check out the bits in red.

Here’s what Thomas said in “A Case Against DNSSEC (A Matasano Miniseries)”:

…You know what? I don’t even agree in principle. DNSSEC is a bad thing, even if it does work.

How could that possibly be?

It violates a fundamental design principle of the Internet.

Nonsense. DNSSEC was designed and endorsed by several of the architects of the Internet. What principle would they be violating?

The end-to-end argument in system design. It says that you want to keep the Internet dumb and the applications smart. But DNSSEC does the opposite. It says, “Applications aren’t smart enough to provide security, and end-users pay the price. So we’re going to bake security into the infrastructure.”

I could have sworn that the bit in italics is exactly what Thomas used to say.  Beautiful.  If Thomas truly agrees with this axiom, and that indeed the Internet (the plumbing) is supposed to be dumb and the applications (service layer) smart, then I suggest he revisit his rants about how embedding security in the network is a good idea, since that invalidates the very “foundation” of the Internet.

I wonder what that’ll do to internal networks?

That’s all.  CSI is on.

/Hoff

(Written @ Home drinking Yirgacheffe watching UFC re-runs)

Full Drive Encryption on Laptops – Time for all of us to “nut up or shut up!”

June 11th, 2006 7 comments

…or "He who liveth in glass houses should either learn to throw small stones or investeth in glass insurance…lots and lots of glass insurance. I, by the way, have lots and lots of glass insurance ;)"

Given all of the recently disclosed privacy/identity breaches resulting from stolen laptops that inappropriately contained confidential data, we’ve had an exponential increase in posts in the security blogosphere on this matter.

This is to be expected.  This is what we do.  It’s the desperate housewives complex. 😉

These posts come from the many security experts, analysts, pundits and IT professionals bemoaning the obvious poor application of policies, procedures, technology and standards that would “prevent” this sort of thing from happening, and calling for the heads of those responsible: not only the people who perpetrated the crime, but also those who made the crime possible; the monkey who put the data on the laptop in the first place.

So, since most of us who are "security experts" or IT professionals almost always utilize laptops in our lines of work, I ask you to honestly respond in comments below to the following question:

What whole-disk encryption solution utilizing two-factor authentication do you use to prevent an exposure of data should your laptop fall into the wrong hands?  You *do* use a whole-disk encryption solution utilizing two-factor authentication to secure the data on your laptop…don’t you?

Be honest. If you don’t use a solution like this then please don’t post another thing on this topic condemning anyone else.  Ever.

Sure, you may say that you don’t keep confidential information on your laptop, and that’s great.  However, if you’ve got email and you’re involved in a company as a security/IT person (or management, or even as a general user), that argument’s already in the bullshit hopper.

If you say that you use encryption for specifically identified “confidential” files and information but still use a web browser or any Office product on a Windows platform, for example, please reference the aforementioned bovine excrement container.  It’s filling up fast, eh?

See where this is going?  If we, the keepers of the gate, don’t implement this sort of solution and still gabble on about how crappy these errant users are, how irresponsible their bosses are, how aware we should make (and how liable we should hold) their Boards of Directors, the government, etc…

I’ll ask you the same question about that USB thumb drive you have hanging on your keychain, too.

Don’t be a hypocrite…encrypt yo shizzle.

If you don’t already, stop telling everyone else what lousy humans they are for not doing this and instead focus on getting something like this, or at a minimum, this.

/Chris