September 2014
The U2 Incident (15 September 2014)
Apple's "Warrant-Proof" Encryption (23 September 2014)

The U2 Incident

15 September 2014

There’s been a bit of an uproar of late about Apple giving away—more precisely, installing in everyone’s iTunes libraries—a copy of a new U2 album. This has made many people very uncomfortable, so much so that Apple had to create a special removal tool. Why is this so upsetting? Would people have been so upset if Apple had simply mailed them a CD? I doubt it—but why is this different?

We often have an uneasy relationship with our gadgets because we don’t really feel that we control them. Often, they do something completely unexpected. Of course, that’s generally due to buggy code or to user misunderstanding of what should happen, rather than anything more sinister. Sometimes, though, we’re reminded that the vendor still has certain powers. This is one such time.

Generally, we purchase our toys. I own my iPhone; no one else has any rights to do anything to it. That’s even a provision of Federal law, which criminalizes access "without authorization or exceeding authorized access". We don’t buy software or services, though; rather, we license them, and we rarely understand the precise license terms. In this case, Apple exercised certain rights not over our devices directly, but over our iTunes accounts; this in turn caused the album to be downloaded to our devices, whether or not we wanted it. (Aside: I didn’t see any relevant clauses, pro or con, on the iTunes Terms and Conditions page.) This was, to say the least, surprising.

The problem is that by doing this, Apple has violated our mental model of our personal space, or perhaps our personal cyberspace. Our iTunes library is our iTunes library; it feels wrong for someone else to mess with it. This violates what I’ve termed the Technical-Social Contract. I noted seven years ago that

Fundamentally, these incidents are all the same: people had a mental (and sometimes legal) model of what was "normal" and possible; technology changed, and one party’s behavior changed with it, to the shock of the other.
That’s what has happened here: Apple has surprised people with its ability to control "our" space. It’s not as nasty or as unpleasant as when Amazon deleted 1984 from someone’s Kindle, but the unease stems from the same source: the company has power over "our" content. We understand physical junk mail, even if it’s an unwanted CD. This, though, feels more like someone walking into our houses and putting a new CD on our shelves. If Apple had merely mailed out a URL—"click here to get a new, free album!"—no one would have been upset. That isn’t what they did.

Apple's "Warrant-Proof" Encryption

23 September 2014

Apple has recently upgraded iOS encryption to eliminate the "back door" for warrants. That is, the new system is designed so that even if police present a valid warrant, Apple will no longer have the technical ability to decrypt the contents of an iPhone or iPad. Not surprisingly, there’s been some unhappiness about it. Orin Kerr, for example, titled his first blog post on the subject "Apple’s Dangerous Game" and stated that he found "Apple’s new design very troubling." In a second post, he acknowledged that the buggy software problem is indeed an issue; in a third, he asked two questions that basically boil down to "where and how should the line be drawn, under two different analyses?" They’re fair questions, though I’ll get ahead of my analysis by stating that I still think that Apple’s move was a good one.

Orin’s first basis for analysis is simple to state:

So here’s the question: In your view, can there ever be a point when there can be too much encryption—and if so, what is that point?
He goes on to point out the many serious crimes for which a seized and decrypted iToy can yield evidence. Point conceded: there are such crimes, and there is such evidence. However, there’s another side to the coin: how many serious crimes are prevented by this sort of encryption, and in particular by the new variant with no back door? I won’t bother discussing whether or not encryption is good; even Orin’s first post noted that "cryptography protects our data from hackers, trespassers, and all sorts of wrongdoers…. [including] rogue police officers." The question, then, is whether a warrant-only back door poses a danger. There are several reasons for believing it does.

First, per Orin’s second post (which drew on arguments advanced by others, notably Matt Blaze), the existence of the code to implement this back door is itself a danger. Code is often buggy and insecure; the more code a system has, the less likely it is to be secure. This is an argument that has been made many times in this very context, ranging from debates over the Clipper Chip and key escrow in the 1990s to a recent paper by Matt, Susan Landau, Sandy Clark, and me. The number of failures in such systems has been considerable; while it is certainly possible to write more secure code, there’s no reason to think that Apple has done so here. (There’s a brand-new report of a serious security hole in iOS.) Writing secure code is hard. The existence of the back door, then, enables certain crimes: computer crimes. Add to that the fact that the new version of iOS will include payment mechanisms, and we see the risk of financial crimes as well.

How, though, did Apple decrypt phones? To my knowledge, they’ve never said. A really stupid way would be to use the same unlock code for all phones. That would be horribly dangerous—one leak would expose everyone—and I doubt that Apple did it that way; if they did, the risk is obvious. More likely, there’s a backup key that’s somehow linked to the device serial number or IMEI. In that case, Apple needs some database of master keys—but there’s no reason to think that they can protect it against high-level attackers. Certainly, RSA couldn’t protect its database of keys when it was hacked. Quite apart from its intended law enforcement use, such a database contributes mightily to attackers’ abilities to decrypt phones. (It’s fair to ask if this decryption key is somehow related to the ability to unbrick a phone that’s been locked by the owner after the device has been lost. I don’t know, but there’s some reason to think that a similar code path is involved.)
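To make that concrete, here is a minimal sketch of the kind of design I’m speculating about—purely my assumption for illustration, not anything Apple has described. If each device’s escrow key is derived from a single vendor-held master secret plus a per-device identifier such as the serial number or IMEI, then whoever holds that secret (or the equivalent per-device key database) can reconstruct the key for any phone; that concentration of risk is precisely the worry.

    # Hypothetical illustration only: a per-device escrow key derived from a
    # vendor-held master secret and the device identifier. Compromise of
    # MASTER_SECRET (or of an equivalent key database) exposes every device.
    import hashlib
    import hmac

    MASTER_SECRET = b"vendor-held master secret"  # assumed value, for illustration

    def derive_escrow_key(device_id: str) -> bytes:
        # HMAC-SHA256 as a simple key-derivation step; the exact construction
        # matters less than the single point of failure it creates.
        return hmac.new(MASTER_SECRET, device_id.encode(), hashlib.sha256).digest()

    # Anyone holding MASTER_SECRET—vendor or attacker—can compute the key
    # for any phone given only its identifier.
    print(derive_escrow_key("IMEI-356938035643809").hex())

If instead the per-device keys are independent and simply stored in a database, the exposure is much the same: protecting that database against high-level attackers is the hard part.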

The most salient point, though, is that the US is not the only country where iToys are sold, nor is our Fourth Amendment standard universally applied. Does Apple have to comply with valid legal process in every jurisdiction where it does business? They almost certainly do, even if the crime being investigated—hate speech, for example—cannot be criminalized under US law. There are also governments that are, shall we say, not renowned for their adherence to due process. Would Apple have to comply with a Chinese warrant (perhaps for the investigation of a dissident or an adherent to Falun Gong)? China is not only a huge market for Apple; it’s where iPhones are assembled. You probably can’t call compliance with such requests a "crime", but if we’re talking about moral balance (and if one accepts American standards as the norm) this is certainly offensive. This sort of access is also prevented by Apple’s new scheme, and has to be weighed against the crimes that might be solved.

Orin’s second premise concerns whether the balance of power between law enforcement’s abilities and individuals’ privacy rights should now shift towards the former to compensate for Apple’s change, and if so, how. I’m not sure I buy the premise; in particular, I’m not sure that I agree that this is the first shift, as opposed to a countershift to restore privacy rights.

As the Supreme Court noted in Riley, "Modern cell phones are not just another technological convenience. With all they contain and all they may reveal, they hold for many Americans ‘the privacies of life.’" The ability of the police, for the last seven or so years, to access this treasure trove of information is, to me, a considerable shift of the balance, without a corresponding shift toward privacy. Arguably, phones pose even more of a challenge to the particularity requirement of the Fourth Amendment than do computers, and searches of computers are themselves controversial. (Orin, of course, has written a great deal on that question.)

In a more practical vein, there is little on most phones that does not exist in other places, notably Apple’s iCloud and/or the computer used for iTunes backups. (Oddly enough, this came up in my class today: I explained that wiping a phone because of a lost password was a reasonable response; the legitimate owner could restore most of its contents with comparatively little effort, while the data on it was of great value to attackers.) Location history may be the exception—but that’s data that police never had access to before the advent of smart phones. If everything were strongly encrypted—all hard drives, cloud backups, and so on—there might be a different balance (though I suspect not); however, we are very far from that state. In particular, I don’t think that we’ll ever be there for most people, precisely because of the recovery problem. (If you turn on full disk encryption (FDE) on Macs, Apple will create a "recovery key"—mine is on a piece of paper in my house—and the system offers other recovery alternatives as well. I’m very curious how many people use these other schemes, such as recovery via an AppleID.)

Add to that other shifts in the balance in recent years, such as real-time location tracking, legal hacking into computers for surreptitious searches, analysis of the kinds of metadata that are available today, and more, and it’s very hard to argue that the balance has shifted towards privacy at all. In fact, law enforcement’s reluctance even to allow discussion of their newer investigative techniques—we don’t know how Scarfo’s password was captured, let alone how, when, and by whom IMSI catchers ("Stingrays") are used—has made for a very scanty basis on which the courts can construct case law.
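Since the recovery problem does a fair amount of work in the argument above, here is a minimal sketch of how FileVault-style recovery can work in general—my assumptions about the usual design, not a description of Apple’s implementation, and written against the third-party Python "cryptography" package. The disk is encrypted under one random volume key; that key is then wrapped separately under a passphrase-derived key and under a randomly generated recovery key. Either path unlocks the same volume, which is why a forgotten passphrase is survivable if the recovery key was written down.

    # Sketch of two independent wrappings of one volume key (generic
    # key-recovery pattern; not Apple's actual code).
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

    def stretch(passphrase: bytes, salt: bytes) -> bytes:
        # Turn a human passphrase into a 256-bit wrapping key.
        return Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(passphrase)

    volume_key = os.urandom(32)          # the key that actually encrypts the disk

    # Path 1: wrap the volume key under a passphrase-derived key.
    salt, n1 = os.urandom(16), os.urandom(12)
    by_passphrase = AESGCM(stretch(b"correct horse battery staple", salt)).encrypt(n1, volume_key, None)

    # Path 2: wrap the same volume key under a random recovery key
    # (the string you're told to write down and keep somewhere safe).
    recovery_key, n2 = os.urandom(32), os.urandom(12)
    by_recovery = AESGCM(recovery_key).encrypt(n2, volume_key, None)

    # Either path recovers the same volume key.
    assert AESGCM(stretch(b"correct horse battery staple", salt)).decrypt(n1, by_passphrase, None) == volume_key
    assert AESGCM(recovery_key).decrypt(n2, by_recovery, None) == volume_key

Recovery via an AppleID would presumably add a third wrapping whose key is held, in some form, by Apple—which is exactly the tension between recoverability and warrant-proofness.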

I’m probably not qualified to comment on the overall merits of Orin’s philosophical point, that equilibrium-adjustment is the proper basis for understanding the evolution of Fourth Amendment law. Assuming that it’s valid, though, I don’t think that the facts here support his argument—quite the contrary: Apple’s change looks like an equilibrium adjustment in its own right, one that restores some of the lost privacy. The only difference is that this adjustment is technical rather than legal.

The risks and benefits of unrestricted cryptography have been debated for more than 20 years, most notably during the so-called "Crypto Wars" of the 1990s. The best comprehensive exploration of the issues is a National Academies study, CRISIS: Cryptography’s Role in Securing the Information Society. I won’t try to recapitulate the whole report here, but one item is worth quoting in full:

Recommendation 1—No law should bar the manufacture, sale, or use of any form of encryption within the United States. Specifically, a legislative ban on the use of unescrowed encryption would raise both technical and legal or constitutional issues. Technically, many methods are available to circumvent such a ban; legally, constitutional issues, especially those related to free speech, would be almost certain to arise, issues that are not trivial to resolve. Recommendation 1 is made to reinforce this particular aspect of the Administration’s cryptography policy.

There’s a lot more, on both sides of the issue (the full report, with appendices, is over 700 pages), but the recommendation I quote is quite clear: there should be no legal restrictions on cryptography within the US.