29 July 2008
The traditional argument against file-sharing, as often expressed by the RIAA and the MPAA, is simple: people are obtaining (copyrighted) content without paying for it; this deprives the creators of the revenues to which they're entitled, both by law and a respect for property. Put that way, it seems simple and obviously right; the arguments against them tend to be about availability, price, and the like, or perhaps a rant against pre-Internet business models. All that said, sometimes there's more going on.
A recent news story tells a different tale. Warner Brothers fought mightily to keep The Dark Knight from appearing on file-sharing networks before the official release date. What mattered was not so much the appearance of the film, but the timing. Why?
Timing, it turns out, is crucial to studio profits. For good movies, an official release ensures maximum curiosity and hence attendance; for a bad movie, the official opening without precursors ensures that people will not have heard the negative buzz before buying tickets.
This is not new, of course. Studios have long manipulated release dates, viewings by critics (never see a movie where there were no pre-release showings for the media), foreign release dates, etc. That last week I could go to Belgium for the PET Symposium and see ad posters for Le Chevalier Noir is no mark of new-found egalitarianism by Warner Brothers; rather, it reflects careful calculation that this was the way to maximize profits. But the buzz is crucial:
Studios fear a reprise of the "Hulk" piracy debacle. A rough, early version of Ang Lee's 2003 summer movie made its way to the Internet two weeks before the film's scheduled premiere, provoking negative reactions from the comic-book film's devoted fans, whose opinion carries far more weight in determining the success of this film genre than that of mainstream film critics.
"A lot of people decided not to go near it. Hollywood argued, correctly, that many more people would have gone to see it, had online buzz not been so critical of the movie," said Eric Garland, chief executive of BigChampagne Online Media Measurement, which monitors file-sharing networks and is a consultant to the entertainment industry.
The argument about file-sharing in this case is no longer about the lost revenue from those who would otherwise have paid to see the movie. Rather, it's about controlling dissent: they don't want people who didn't like the movie to say so publicly, before lots of people pay for the privilege of seeing a bad movie.
The file-sharing debate is much more nuanced in this situation. While property owners have the right to use their property in the way that's most profitable for them, it's no longer a question of consumption without compensation. Rather, it's a question of controlling information flow, and that's a different kettle of fish entirely. One gets the feeling that if it were legal and practical, the studios would limit unfavorable online discussion of their movies — remember the "veggie libel" laws sponsored by the food industry, or SLAPP suits? Fortunately, the American tradition and legal system are hostile to such things.
The lesson here is that one should look beneath the covers of all arguments about the harm done by file-sharing. The traditional claim has been that each downloaded file represents a lost retail sale. That claim is false in both directions. Some people would never have purchased the product (and hence represent no loss of revenue); in other situations, single copies in the "wrong" hands will represent a greater loss of revenue, but only because of information flow. In this situation, downloads in the absence of blogs and the like would have very little effect. The real issue, then, is this: should the studios have a monopoly on market perceptions? Worse yet, should the government, by means of copyright law, help enforce this monopoly? In ACLU v. Reno (96-963), Judge Dalzell wrote
It is no exaggeration to conclude that the Internet has achieved, and continues to achieve, the most participatory marketplace of mass speech that this country -- and indeed the world -- has yet seen.
Copyright law is no excuse for reversing that.
24 July 2008
In a recent campaign appearance, Barack Obama made a number of proposals regarding "cyberterrorism". (Eugene Spafford was at the speech and blogged about it; be sure to read his description. You can find the text of Obama's speech here and a fact sheet with more details here.) I'm glad to hear that Obama is taking cybersecurity seriously (and I'll be glad to post a similar note if I see similar news stories about John McCain), but I fear he may be barking up the wrong tree. I can summarize my concerns in four points: the issue is cybersecurity, not cyberterrorism; there are no magic bullets; execution and policy matter a lot; and (of course) we need to do more research. I'll discuss each of these in turn.
Cyberterrorism versus Cybersecurity
Obama spoke specifically about "cyberterrorism" — the risk that terrorists might use cybercapabilities to attack U.S. interests. The problem, though, is that this focus characterizes the threat too narrowly. The Internet Security Glossary gives this explanation (among others) for "threat":
To be likely to launch an attack, an adversary must have (a) a motive to attack, (b) a method or technical ability to make the attack, and (c) an opportunity to appropriately access the targeted system.
This is the classic trinity known to all mystery fans: motive, means, and opportunity. In principle, defenders can use any one of the three to foil an attack.
Motive is the hardest one for an outsider to assess. At best, it's a matter of delicate intelligence assessments about what an enemy plans to do. There have been many news stories about, say, Chinese government-sponsored hacking; there have been many fewer articles about al Qaeda's cyber plans. Perhaps there have been fewer leaks; perhaps there has been less information to leak. Regardless, it seems clear that there has been serious activity by nation-states.
Of course, the flip side is that everyone — the U.S., other countries, and the terrorists — uses the Internet. There has been a lot of speculation that the Internet is too useful to the bad guys as a communications system for them to want to damage it. There is an irony here: the more freely they use the Internet, the less willing they'll be to risk losing their own access by launching a cyberattack. Conversely, the more the U.S. government monitors Internet communications in the hope of catching terrorists, the less reason they'll have to refrain from attacking.
When it comes to means, the situation is considerably bleaker. Lots of people can launch cyberattacks; many of them are mercenary and sell exploits to the highest bidder. They don't care if the buyer is a government, a terrorist group, an extortionist, a credit card number thief, or a spammer; what counts is profit. While we can safely assume that nation-states have very great capabilities, both they and the terrorists can easily purchase capabilities they don't have. The publicly-known capabilities of the bad-guy hackers are demonstrably enough to do great damage. (It is worth noting that even ordinary attacks can affect the sorts of infrastructure targets that cyberterrorists may go after.)
Opportunity — for our purposes, that is the remaining security holes that exist in our systems — is the most promising avenue for the defenders, since we can to some extent control it. We have little control over whether or not someone can attack us, and exploits are much more easily distributed and obtained than, say, highly enriched uranium. But we can (to some extent) plug our own holes. This, then, has to be the focus of our work: defending our systems, regardless of who the attacker is.
Some will object that cyberterrorists and nation-states have greater capabilities than commercial attackers. While arguably true, it's irrelevant: we aren't even doing an adequate job defending against the "easy" attacks. And these attacks are devastating; TJX alone lost more than US$250 million to one group of attackers.
Focusing on generic cybersecurity will help against real, serious vulnerabilities without needing to speculate on enemy intentions or capabilities. That is, the same cybersecurity efforts we need to defend against cybercriminals defend against cyberterrorists.
No Magic Bullet
It is very important that our next president recognize that there is no magic bullet that will solve the cybersecurity problem. Most security problems are due to buggy code; I regard buggy code as the oldest unsolved problem in computer science, and I do not anticipate a solution any time soon. More than 20 years ago, Fred Brooks wrote a classic essay "No Silver Bullet" (a copy appears to be here). In it, he noted that
I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation. We still make syntax errors, to be sure; but they are fuzz compared with the conceptual errors in most systems.
The same is true for cybersecurity (a subset of the reliability issues Brooks was talking about).
If this is true, building software will always be hard. There is inherently no silver bullet.
By the same token, a Manhattan Project-type effort won't work. We don't know how to produce secure, bug-free code; we don't even know if it's possible for human programmers to do it. We're not dealing with the laws of physics, which very clearly do permit at least some chain reactions; we're dealing with the limitations of the human brain. We've also seen many failed attempts at panaceas. Throwing a large pile of money at the problem will not magically cause a solution to appear.
Execution and Policy Matter
For all my pessimism about complete solutions, we can certainly do a lot better than we're doing today. Some of the advice is mundane and familiar: patch your systems. (Other conventional advice, such as "pick strong passwords", is at best overblown and arguably harmful.) Other aspects are more difficult: proper system design counts for a lot. A cybersecurity czar with both a bully pulpit and some regulatory authority might accomplish a lot. To give just one example, many banks have insecure web sites. Are government regulations the cure, or at least part of it?
Some will argue that market mechanisms will solve the problem. Companies with poor security practices — again, think TJX — will pay the price. Unfortunately, there are serious market failures, such as end-user license agreements that shield some actors from liability. Similarly, consumers have little knowledge of (and often little choice about) software choices and their security implications. Perhaps liability and its corollary, insurance, are part of the solution. One important role for a cybersecurity czar is to develop a comprehensive set of policies (including proposed new laws and regulations) that will let the market function. There will be — there must be — a lot of debate over the issues. To give just one example, what will be the effects of liability (let alone strict liability) on open source software development and distribution? These questions do not have obvious answers, but it will be easier to discuss the questions in the context of a comprehensive solution.
More Research
I'm an academic, so of course I'm calling for more research. The argument, though, is simple: we don't know how to solve the problem. While I don't think we'll ever have perfect solutions, are there unknown techniques that could help? We don't even know if firewalls are useful or not.
There is a lot of room for more research. I served on a recent National Academies study committee that outlined some important research issues; I'll only mention two here. First, if some level of insecurity is inevitable (and I think it is), how do we minimize the damage? Second, there is a need for long-term, sustained effort; short-horizon programs won't produce fundamental breakthroughs. There have been press reports that DARPA has moved away from such a focus (note: this is not a conclusion I'm attributing to the committee).
Conclusions
There is definitely a cybersecurity problem, and there is a lot a president can do to help solve it. It is much less clear that there is a cyberterrorism problem; however, dealing with the more mundane issues will help defend us against such threats if they do exist.
10 July 2008
There's been a lot of attention paid recently to the issue of laptop searches at borders, including a congressional hearing and a New York Times editorial. I've seen articles with advice on how to protect your data under such circumstances; generally speaking, the advice boils down to "delete what you can, encrypt the rest, hope that Customs officials don't compel production of your key, and securely clean up the deleted files". If you need sensitive information while you're traveling, the usual suggestion is to download it over a secure connection, per the EFF:
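The "encrypt the rest, securely clean up the deleted files" advice can be sketched with standard tools. This is only an illustration under assumptions: the filename is hypothetical, the passphrase is read from an environment variable purely for demonstration (letting the tool prompt interactively is safer in practice), and it requires OpenSSL 1.1.1 or later for the `-pbkdf2` option.

```shell
# Hypothetical passphrase, set in the environment for this sketch.
export TRAVEL_KEY='correct horse battery staple'
printf 'confidential client data\n' > client-notes.txt

# Encrypt the sensitive file with a passphrase-derived key before travel.
openssl enc -aes-256-cbc -pbkdf2 -salt -pass env:TRAVEL_KEY \
    -in client-notes.txt -out client-notes.txt.enc

# Overwrite and delete the plaintext so "deleted" data can't be trivially
# recovered. Note: shred's guarantees are weak on SSDs and journaling
# filesystems, so this is a best-effort step, not a complete one.
shred -u client-notes.txt

# At the destination, decrypt with the same passphrase.
openssl enc -d -aes-256-cbc -pbkdf2 -pass env:TRAVEL_KEY \
    -in client-notes.txt.enc -out client-notes.txt
```

Of course, as discussed below, none of this helps if officials can compel production of the passphrase itself.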
Another option is to bring a clean laptop and get the information you need over the internet once you arrive at your destination, send your work product back, and then delete the data before returning to the United States. Historically, the Foreign Intelligence Surveillance Act (FISA) generally prohibited warrantless interception of this information exchange. However, the Protect America Act amended FISA so that surveillance of people reasonably believed to be located outside the United States no longer requires a warrant. Your email or telnet session can now be intercepted without a warrant. If all you are concerned about is keeping border agents from rummaging through your revealing vacation photos, you may not care. If you are dealing with trade secrets or confidential client data, an encrypted VPN is a better solution.
But is it?
When a laptop is searched, the customs agents are not looking for drugs embedded in the batteries or for whether or not the connectors have too much gold on the contacts. Rather, they're looking for information.
In that sense, it would seem to make little difference if the information is "imported" into the US via a physical laptop or via a VPN, or for that matter by a web connection. The right to search a laptop for information, then, is equivalent to the right to tap any and all international connections, without a warrant or probable cause. (More precisely, one always has a constitutional protection against "unreasonable" search and seizure; the issue is what the definition of "unreasonable" is.)
According to an analysis of the revisions to FISA, "the bill affirmatively permits electronic surveillance without any warrant for Americans' international communications when the NSA is 'targeting' a foreigner or group abroad." By contrast, a border search targets a particular individual entering the country, rather than some foreign group. But if warrantless searches for information are legal, does this provide a non-statutory extension to FISA, a way to justify warrantless wiretapping beyond what FISA already permits?
It gets worse. In general, one has no right to hide contraband from customs agents when crossing the border. Is hiding imported information — that is to say, using an encrypted international connection — improper? Put another way, is using encryption on an international connection the equivalent of hiding physical objects in a false-bottomed suitcase? If so, it is saying that the government must have access to all keys, a notion that was quite thoroughly discredited (and rejected by the American public) during the debate over the Clipper chip in the 1990s.
What about a court order compelling disclosure of a key or passphrase? The legal situation is quite unclear. In In re Boucher, a judge ruled that the Fifth Amendment protection against self-incrimination could be invoked to deflect a request for a passphrase. While the ruling may be debatable given the facts of that particular case, the judge's reasoning struck me as sound. Also note that Boucher was a criminal case; the situation in a civil case is even less clear, since the protection against self-incrimination would not apply.
There's one more philosophical point to consider. Restricting the public's access to "foreign" information is antithetical to the basic principles of the First Amendment, and of Freedom of Thought. Trying to restrict access to information in this way is the moral equivalent of the practice of denying visitor visas to Communists (imagined or real) during the 1950s.
Perhaps I'm carrying my arguments too far. Perhaps the slope isn't as slippery as I've portrayed it, though Kerr seems to agree that the Constitution would permit warrantless searches of international connections. At the least, we need a clear statement of what the rules are for government access to imported information, whether it is carried in physically or transmitted electronically.