July 2014
What Spies Do (20 July 2014)
What Should PGP Look Like? (22 July 2014)

What Spies Do

20 July 2014

There’s been a lot of outcry in the wake of the Snowden revelations about retention of data on non-targeted individuals. I think that a lot of this is due to confusion about what intelligence agencies do, as contrasted with the more familiar requirements and procedures of law enforcement. Basically, the two groups are almost entirely dissimilar, in what they need and what they do. Reasoning from the needs of the police to understand the needs of the intelligence community just doesn’t work. Conversely, letting the police act like intelligence agencies is often a threat to due process. Note that I’m speaking of the intelligence versus law enforcement roles, as opposed to organizations. Mixing the two roles is what causes trouble.

The goal of a police investigation is prosecution and conviction of malefactors. They need evidence that is legally admissible, they have to disclose their evidence in court (and turn over exculpatory evidence to the defense), and they must prove guilt beyond a reasonable doubt, in the face of opposition by a defense attorney. Information that can’t be used in this fashion isn’t useful to them; it’s quite proper to insist that it not be collected or retained.

None of that is true for intelligence agencies. There’s no question of admissibility, only reliability. There’s no due process, no requirement to disclose anything, no adversarial process in anything like the same fashion. Intelligence agencies virtually never know anything beyond a reasonable doubt—and if they think they do, they’ll worry that they’ve been misled by disinformation. It’s always a question of what they can glean from fragmentary sources. Furthermore, they often succeed solely because of their historical and contextual background: something is useful only because of apparently inconsequential information they’d gathered years earlier and retained against possible future utility. (The best accessible descriptions of how this is done can be found in intelligence histories of World War II; most later stuff is either still classified or far too hard to find.)

The most troubling recent story, to some, was the Washington Post’s description of how the NSA monitored and kept the conversations between an Australian woman and her Taliban-sympathizing boyfriend.

Looking back, the young woman said she understands why her intimate correspondence was recorded and parsed by men and women she did not know.

"Do I feel violated?" she asked. "Yes. I’m not against the fact that my privacy was violated in this instance, because he was stupid. He wasn’t thinking straight. I don’t agree with what he was doing."

What she does not understand, she said, is why after all this time, with the case long closed and her own job with the Australian government secure, the NSA does not discard what it no longer needs.

Why might this still be relevant? Let’s try a hypothetical example.

Suppose that the CIA would like to spy on the Taliban. Unlike in the movies or even law enforcement, real operatives are rarely the actual intelligence gatherers. They can’t be; they’re too obviously outsiders (though there are certainly exceptions like Eli Cohen). Instead, the typical agent is a handler, someone who manages an insider who has either volunteered to spy or been "turned". Spies’ skills are applied psychology, manipulation, and perhaps lying, extortion, and bribery. Donnie Brasco equivalents don’t infiltrate the Taliban; rather, the CIA or whoever will try to turn someone whom the Taliban would find plausible—such as the Australian who’d already tried once to join them. But how could they reach him? What is their insight into his personality, or their lever to pressure him? Yup—those recorded, intimate conversations. Knowledge that "what we did was evil and cursed and may allah swt MOST merciful forgive us for giving in to our nafs [desires]" might be a very good lever indeed.

Is that particular scenario realistic? It probably wouldn’t happen, if for no other reason than that that man might never try again. Intelligence agencies, though, are always playing low probabilities, because there are no high ones in their world. That file may not prove useful, but one of the others just might—or so they hope.

Is such behavior by the spooks immoral? Call intelligence agencies amoral; more precisely, they subscribe to a different moral code. Espionage, it turns out, is not against international law. Spying, by all nations, always has been like this and probably always will be. Telling a major country to give up spying is like telling a lion to become a vegetarian. It just won’t happen, until well after there’s a sustained outbreak of world peace. That doesn’t mean people shouldn’t try to restrict it, but such efforts are not likely to succeed.

The NSA stories raise other issues, including retention of their data on Americans, but that’s a subject for a different essay.

Tags: NSA CIA

What Should PGP Look Like?

22 July 2014

Those who care about security and usability—that is, those who care about security in the real world—have long known that PGP isn’t usable by most people. It’s not just a lack of user-friendliness; it’s downright user-hostile. Nor is modern professional crypto any better. What should be done? How should crypto in general, and PGP in particular, appear to the user? I don’t claim to know, but let me pose a few questions. These are conceptual questions, but until they’re answered those who really understand user interfaces can’t begin to build a suitable solution.

There are a few assumptions I want to start with. First, for the foreseeable future, there will be a mix of secure—encrypted and/or digitally signed—and insecure email. It can’t be otherwise; the net is too large to flash-cut anything.

Second, even individuals who sometimes use crypto won’t necessarily have it available all the time. They may be using a machine without their keys, or without the necessary software, or they may be temporarily using a web mailer.

Third, our end systems are not as secure as we’d like.

Fourth, certain concepts (certificates, key fingerprints, web of trust, etc.) are far too geeky and must be hidden. By "geeky" I mean that the concepts are quite unfamiliar to most people, and unless and until we can find an analogy that fits people’s mental models they have to be hidden.

Should users request security?
In an ideal world, all email would be secure. We’re not in such a world, and per my first assumption we’re not going to be for a very long time. Furthermore, per my second assumption, even PGP users can’t always receive encrypted email. Should senders be forced to request encryption explicitly? Should they be able to turn it off if it’s on by default? What about email to a group of recipients, some of whom can receive PGP-protected email and some of whom cannot? How should that be indicated to the sender? I could make a very good case that this situation shouldn’t be allowed—but I could make an equally good case that it should.
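To make the mixed-recipient dilemma concrete, here is a minimal sketch (in Python) of the decision a mailer would face; the function and the keyring lookup are hypothetical, not taken from any existing mailer.

    # Hypothetical sketch: what a mailer must decide for a mixed group of
    # recipients. "keyring" is assumed to map addresses to known public keys.
    def plan_send(recipients, keyring):
        capable = [r for r in recipients if r in keyring]
        incapable = [r for r in recipients if r not in keyring]
        if not incapable:
            return "encrypt to everyone"
        if not capable:
            return "send in the clear"
        # The hard case: refuse to send? Downgrade everyone to cleartext?
        # Split into two differently protected messages? Each answer has costs.
        return "policy decision needed"

    print(plan_send(["alice@example.org", "bob@example.org"],
                    {"alice@example.org": "<alice's public key>"}))

Whatever the mailer picks, the choice has to be surfaced to the sender somehow, which is exactly the interface question above.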
How should encrypted email be indicated to the recipient?
If I have reason to think that someone hostile is watching my email or my correspondents’, I would not expect certain things to be said in unprotected email—and if they are said, I might be suspicious about what’s really going on.
How should signed email be indicated to the recipient?
Digitally signed email has a higher degree of assurance of who sent it. Not certainty by any means, but higher and perhaps considerably higher. How should this distinction be shown? This is more or less the same problem as indicating an encrypted web site or distinguishing a phishing site from the real one: you’re adding a new indicator that people aren’t accustomed to looking for. In fact, it may be worse. The real sites for many banks are always encrypted, but per my assumptions a lot of email will be in the clear even if it’s from folks who sometimes use PGP. (Having some sort of near-forgeable “seal” could create its own problems: attackers will spoof it, and users may have more trust than they should. Users want email that’s “really from the bank”, not email that’s signed by someone random, possibly with the bank’s logo.)
How do we protect private decryption keys?
In an ideal world, my decryption key could sit in my mailer, which would quietly decrypt anything sent to me. Mailers, though, are huge, complex, ungainly things; should we trust them with keys when they’re not needed? Also, having to supply a key is a sure sign that I’ve just received some encrypted email—but most people won’t remember that not supplying the key means the email wasn’t secure. This is especially true if the mailer caches the decrypted private key during an email conversation. Should we use external hardware? Apart from the fact that some interesting platforms (e.g., Apple’s smartphones and tablets) don’t have useful external ports, the insecure host hypothesis means that malware could be feeding encrypted emails into this outboard hardware and silently sending the cleartext back to an adversary.

Oh yes—will most users choose a key-protecting passphrase of "123456"? Experience suggests yes. Get rid of passphrases? Sure—but what do we replace them with? Two-factor authentication? Many tokens have their own usability challenges, even if users don’t have to supply passphrases, fingerprints, DNA samples, or worse. Using a fingerprint reader for key unlock, as is present on recent iPhones (and on some laptops going back a fair number of years), assumes that the device is secure (per my third assumption, hosts aren’t); besides, it’s awfully hard to convert a biometric into a key-encrypting key. (Yes, there have been some papers on the subject. It’s still hard, and I’m not convinced it’s been done securely enough.)
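For concreteness, here is a minimal sketch of the usual passphrase-to-key step, written against the Python "cryptography" package’s scrypt KDF; the parameters and the tiny dictionary are illustrative only, not a recommendation. The point is that a deliberately slow KDF multiplies the attacker’s cost per guess but cannot rescue a passphrase that appears in any short list of common choices.

    # Sketch: deriving a key-encrypting key (KEK) from a passphrase with scrypt.
    # Parameters are illustrative, not a recommendation.
    import os
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

    salt = os.urandom(16)

    def derive_kek(passphrase: bytes) -> bytes:
        # A Scrypt object is single-use, so build a fresh one per derivation.
        kdf = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1)
        return kdf.derive(passphrase)

    kek = derive_kek(b"123456")   # the KEK that would wrap the private key

    # Why "123456" loses anyway: a small dictionary needs only a few guesses,
    # however slow each individual guess is made.
    for guess in (b"password", b"letmein", b"123456"):
        if derive_kek(guess) == kek:
            print("recovered passphrase:", guess.decode())
            break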

How do we protect private signing keys?
Similar considerations apply to signing keys. In fact, the problem is worse; I only need my decryption key when I receive encrypted email (which by hypothesis will be rather unusual), but I may want to sign everything I send as a way to help bootstrap crypto.

The difficulty of protecting my private keys is why I personally don’t sign all of my outbound email.

What should key exchange look like?
In order to send secure email to someone, your mailer has to have access to their public key; for them to verify a signed email from you, they have to have access to your key. These bindings have to be (adequately) secure. How should this be done? The "official" way, with certificates, fingerprints, and the web of trust, is unacceptably complex. Is there a good analogy that is also acceptably secure?
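As one illustration of why the "official" machinery feels geeky, here is roughly what a fingerprint check asks of ordinary users; this sketch uses a plain SHA-256 of the key bytes rather than the actual OpenPGP fingerprint construction, which hashes a specific packet encoding.

    # Sketch of the fingerprint-comparison ritual. Real OpenPGP fingerprints
    # are computed over a defined packet format; a bare hash stands in here.
    import hashlib

    def fingerprint(public_key_bytes: bytes) -> str:
        digest = hashlib.sha256(public_key_bytes).hexdigest().upper()
        # Grouped into four-character blocks, the way fingerprints are shown.
        return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

    print(fingerprint(b"-----BEGIN PGP PUBLIC KEY BLOCK----- ..."))
    # Users are expected to compare this string out of band (over the phone,
    # from a business card) before trusting the key binding.

Asking people to read sixty-odd hex characters to each other is precisely the kind of step that needs a better analogy or needs to disappear.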
How should exceptions be handled?
How do we handle unusual conditions, such as key change? Key revocation?
What is our threat model?
Who is the enemy? A sibling? A suspicious spouse? An employer? Criminal hackers? Hackers with government backing? A law enforcement agency with a stack of subpoenas and court orders? A major intelligence agency?

This matters a lot. There are relatively simple solutions to some of the key-handling problems for the lower threat models: provider-stored, self-signed certificates, key continuity, key-caching based on all previous emails, and more. These strategies are not very useful against, say, the NSA or the PLA’s equivalent, but the loudest calls for ubiquitous encryption are from people who are worried about just such threats.
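To show what "key continuity" amounts to in practice, here is a minimal trust-on-first-use cache, loosely modeled on SSH’s known_hosts file; the file location and function names are hypothetical. Its limitation is exactly the one noted above: it detects a key that changes later, but not an adversary who was already in the middle the first time you corresponded.

    # Minimal trust-on-first-use ("key continuity") sketch, in the spirit of
    # SSH's known_hosts. The cache path and names here are made up.
    import hashlib, json, os

    CACHE_PATH = os.path.expanduser("~/.mail-key-cache.json")

    def _load_cache() -> dict:
        if os.path.exists(CACHE_PATH):
            with open(CACHE_PATH) as f:
                return json.load(f)
        return {}

    def check_sender_key(sender: str, public_key_bytes: bytes) -> str:
        cache = _load_cache()
        fp = hashlib.sha256(public_key_bytes).hexdigest()
        if sender not in cache:
            cache[sender] = fp                 # first contact: remember the key
            with open(CACHE_PATH, "w") as f:
                json.dump(cache, f)
            return "first use; nothing to compare against"
        if cache[sender] == fp:
            return "matches the key seen before"
        return "KEY CHANGED: legitimate rollover, or an attack?"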

The thing that distinguishes encryption from just about any other user interface question is that by definition, here we have an enemy. (Phishing? If we had usable crypto, phishing wouldn’t be a problem…)

I have some tentative answers to some of these questions, but mostly for lower threat models. Is that good enough? Is it worth the effort?