3 July 2007
Tennessee has a new law requiring age verification on all purchases of beer. Nominally, this is an attempt to prevent underage drinking. I'll leave it to others to discuss whether or not it's likely to be effective, or why the law applies only to beer and not to wine or hard liquor. (I was tempted to muse on the effects of generalizing this idea to a famous local product — Jack Daniel's Tennessee Whiskey — but to do that properly I'd have to link to their web site, and that is allegedly difficult. It seems that Jack Daniel's has a "Linking Agreement". Is that enforceable? I may comment on that another time.)
There is, however, a privacy issue. Note this portion of the cited article:
Richard Rollins, who owns a convenience store in Nashville, is already using a computerized scanner to check everyone's driver's licenses when they buy beer. "We just say we're trying to keep our beer permit, and this is the safest way," Rollins said.

That's right — stores and bars can and do record various personal details from your driver's license. The purpose of the check is to prove your age; the alleged purpose of the scanner is to detect counterfeit licenses. The trouble is that it does more, and most people don't realize that.
But it has stopped Jeff Campbell from shopping at Rollins' market.
"I don't mind them asking for my ID, but they don't need my driver's license number," said Campbell, 43. "I'm just buying a six-pack. All they need to know is how old I am."
Rollins said scanning licenses has proved beneficial in other ways, such as catching criminals.
When one customer tried to make a purchase using a counterfeit bill, Rollins said police were able to track him down because the receipt from the scanner showed his name, license number, and address.
Privacy has been defined as "the right of an entity (normally a person), acting in its own behalf, to determine the degree to which it will interact with its environment, including the degree to which the entity is willing to share information about itself with others." Many uses of the scanners are intended to collect customer information; see, for example, CardVisor II, IdVisor, and more that I didn't find in two minutes of searching the net.
The privacy threat isn't new. However, this new law is likely to increase the usage of such scanners, and with that the threat to privacy will increase.
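The "it does more than check your age" problem is easy to see if you look at what the barcode on a license actually carries. Here's a minimal sketch of a scanner's parsing step; the field tags follow the AAMVA driver's license barcode standard as I understand it (DAQ is the license number, DBB the date of birth, DAG the street address), and the sample record itself is invented:

```python
# Sketch: what a license scanner can extract from the PDF417 barcode on a
# US driver's license. Tags are AAMVA data element IDs; data is invented.

SAMPLE = "DAQD12345678\nDCSCAMPBELL\nDACJEFF\nDBB01151964\nDAG123 MAIN ST\n"

def parse_aamva(data: str) -> dict:
    """Split newline-separated barcode records into {tag: value} pairs."""
    fields = {}
    for line in data.splitlines():
        if len(line) >= 3:
            fields[line[:3]] = line[3:]
    return fields

fields = parse_aamva(SAMPLE)
# Age verification needs only the date of birth:
print(fields["DBB"])
# ...but the scanner has everything else, too — name, license number, address:
print(fields["DAQ"], fields["DAG"])
```

The point is that nothing in the hardware limits the scanner to the one field the law cares about; whatever is in the barcode is available for retention.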
5 July 2007
A Belgian court has ruled that ISPs are legally responsible if their users engage in illegal file-sharing. It has given an ISP (Scarlet Extended SA) six months to block such traffic. Details are here. Part of the court's rationale: an expert appointed by the court concluded that there were at least seven different technologies ISPs could use to block such traffic. Only one — Audible Magic's fingerprinting scheme — was identified in the article.
To me, this ruling is both preposterous and dangerous. It's preposterous because it makes too many assumptions about how ISPs run their networks and what happens over them. It's dangerous because it reverses the fundamental equation of the Internet: that end-points have the right and the responsibility to send and receive proper traffic, while the network is simply supposed to carry the bits. Carried to extremes, the court's attitude would require would-be innovators to seek permission from ISPs before introducing a new service, lest the service enable potentially-illegal behavior that the ISP didn't yet know how to monitor.
The law in the US is pretty clear right now that ISPs are not liable for such enforcement. This ruling, though, was based on the Belgian court's interpretation of EU law. If other European courts follow suit, the result could be very bad indeed for the net — or for Europe…
6 July 2007
There's a fascinating new IEEE Spectrum article by Vassilis Prevelakis and Diomidis Spinellis about the Greek cellphone tapping incident. In this incident, someone — just who remains unknown — inserted some code in some phone switches to abuse the built-in wiretap facilities to eavesdrop on calls. Over 100 people's lines were monitored, up to and including the prime minister.
There are two important lessons to be drawn from this incident. First, logging and process are very important. Everyone involved in system design or operation should pay attention to that portion of the article. I say "everyone" and not "all security people" because the logs in question are not necessarily intended for security purposes.
The second lesson, of course, is that built-in wiretap facilities and the like are really dangerous, and are easily abused. See, for example, Security Implications of Applying the Communications Assistance to Law Enforcement Act to Voice over IP, by myself, Blaze, Brickell, Brooks, Cerf, Diffie, Landau, Peterson, and Treichler; The Real National-Security Needs for VoIP, by me, Blaze, and Landau; Comments on the Carnivore System Technical Review, by me, Blaze, Farber, Neumann, and Spafford; The RISKS of Key Recovery, Key Escrow, and Trusted Third-Party Encryption by Abelson, Anderson, me, Benaloh, Blaze, Diffie, Gilmore, Neumann, Rivest, Schiller, and Schneier; CERT® Advisory CA-2000-18: PGP May Encrypt Data With Unauthorized ADKs; and many more.
Update: Matt Blaze has also blogged about this article.
7 July 2007
The 9th Circuit Court of Appeals has just issued a dangerous opinion in United States v. Forrester on the applicability of "pen registers" to the Internet. In doing so, they ignored important technical issues that go to the heart of what makes the Internet different from the phone network.
A pen register is a device that records what phone numbers someone dials. (A close cousin, the trap-and-trace device, records what phone numbers dial a particular number.) The criteria for law enforcement use of either are spelled out in 18 USC 3121-3127. The crucial constitutional element in these statutes is that a search warrant, which must be supported by "probable cause", is not required. Instead, all that's needed is "a certification by the applicant that the information likely to be obtained is relevant to an ongoing criminal investigation".
This procedure was justified by Smith v. Maryland, 442 U.S. 735 (1979). In it, the Supreme Court ruled that phone numbers were voluntarily given to a third party — the phone company — and that the caller thus had no legitimate expectation of privacy. It noted that
Petitioner concedes that if he had placed his calls through an operator, he could claim no legitimate expectation of privacy. We are not inclined to hold that a different constitutional result is required because the telephone company has decided to automate.

The court also noted that people realize that the phone company can record such information:
All subscribers realize, moreover, that the phone company has facilities for making permanent records of the numbers they dial, for they see a list of their long-distance (toll) calls on their monthly bills. In fact, pen registers and similar devices are routinely used by telephone companies "for the purposes of checking billing operations, detecting fraud, and preventing violations of law." Electronic equipment is used not only to keep billing records of toll calls, but also "to keep a record of all calls dialed from a telephone which is subject to a special rate structure." Pen registers are regularly employed "to determine whether a home phone is being used to conduct a business, to check for a defective dial, or to check for overbilling."

But does any of this apply to IP addresses, email addresses, and URLs? Individuals do not see bills for email or packets sent. The overabundance of spam would tend to suggest that no one is checking for Internet fraud or violations of the law, let alone by something like a pen register. In short, ordinary users do not have the same awareness that they arguably do for phone numbers.
Beyond that, there is a more important distinction. A crucial part of the court's reasoning in Smith was that phone numbers are "given" to the phone company:
This Court consistently has held that a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties.

Are IP addresses, email addresses, or URLs given to a third party? It is in this respect that the court missed some subtle technical points. IP addresses are clearly given to the ISP; there's no other way for the packets to reach their destination. But what of email addresses? For most consumers, destination email addresses are indeed given to their ISP; specifically, they are sent to the ISP's SMTP relay server, and that machine does the actual email delivery. Similarly, inbound email is generally retrieved from the ISP's mail server. While it is unclear to what extent consumers understand the detailed configurations involved, the fact that consumer email addresses tend to have the ISP's domain name makes this a plausible argument. In this respect, the reasoning in Smith would seem to apply. However, the Internet is not the phone network.
The essence of the Internet is end-to-end communications, where the ISP is not involved except as a passive carrier of the traffic. There is nothing preventing someone from sending email traffic directly from his or her originating machine to the recipient's receiving machine. No third parties — a crucial part of the Supreme Court's reasoning — need be involved. While admittedly that would be an unusual situation for consumers, it is quite common for businesses. (Even some individuals have such setups; indeed, some email I sent to a friend discussing this case followed exactly such a path: from my machines directly to my friend's, with no ISP servers involved.) The opinion in this case ignores the distinction; the factual record as presented here does not state whether or not ISPs were involved. In that way, it sets a dangerous precedent.
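To make the distinction concrete, here is a sketch of exactly such a direct path (hostnames invented): the sending machine connects straight to the recipient's mail exchanger, so no third-party server ever handles the addresses. In practice the sender would first look up the MX record for the recipient's domain.

```python
# Sketch of end-to-end mail delivery: the sending machine hands the message
# directly to the recipient's mail server, with no ISP relay in between.
# The hostname is invented; a real sender would resolve the MX record first.
import smtplib
from email.message import EmailMessage

def deliver_directly(msg: EmailMessage, mx_host: str, dry_run: bool = True) -> str:
    """Connect straight to the recipient's mail exchanger and deliver."""
    if dry_run:                        # keep the sketch network-free
        return f"would connect to {mx_host}:25"
    with smtplib.SMTP(mx_host, 25) as server:
        server.send_message(msg)
    return "delivered"

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "end-to-end"
msg.set_content("This message need never touch an ISP mail server.")

print(deliver_directly(msg, "mail.example.net"))
```

Nothing here requires the sender's ISP to do anything but carry packets; the "third party" of Smith simply never appears.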
There is one bright note in the court's opinion. A footnote noted that
Surveillance techniques that enable the government to determine not only the IP addresses that a person accesses but also the uniform resource locators ("URL") of the pages visited might be more constitutionally problematic. A URL, unlike an IP address, identifies the particular document within a website that a person views and thus reveals much more information about the person's Internet activity. For instance, a surveillance technique that captures IP addresses would show only that a person visited the New York Times' website at http://www.nytimes.com, whereas a technique that captures URLs would also divulge the particular articles the person viewed.

The pen register statute specifically bars interception of "content"; "content", according to the statute
includes any information concerning the substance, purport, or meaning of that communication.

Does the non-host part of a URL qualify? I think so, and the judges in this case seem to think so, but it's never been tested in court.
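The split between the addressing part of a URL and the arguably content-bearing part is mechanical. A sketch (the "pen register analog" framing is mine):

```python
# Sketch: split a URL into the part comparable to a dialed phone number
# (the host) and the part that reveals *what* you read there (path + query).
from urllib.parse import urlparse

def split_for_pen_register(url: str):
    p = urlparse(url)
    addressing = p.hostname            # like the number you dialed
    content = p.path                   # like the contents of the call
    if p.query:
        content += "?" + p.query
    return addressing, content

host, rest = split_for_pen_register(
    "http://www.nytimes.com/2007/07/09/business/some-article.html")
print(host)   # www.nytimes.com
print(rest)   # /2007/07/09/business/some-article.html
```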
13 July 2007
There's an AP wire story about some of the problems and issues some people have with Windows Vista. Most of the problems are — and were — quite predictable. Some applications don't work with it, device drivers haven't always been updated, etc. This sort of issue is neither new nor surprising.
In a similar vein, the article discusses businesses' hesitance to upgrade yet:
Before the business version of Vista landed late last year, a Forrester survey of about 1,600 companies found that 31 percent planned to upgrade within a year, and 22 percent more planned to be running it within two years.

That section conflates two different issues, application compatibility and OS bugginess. Again, this isn't new. Sometimes, applications rely on things they shouldn't have relied on; in this case, that was often the insecurity of older operating systems. In fact, some researchers developed a special tool to help figure out why applications need too many privileges. Naturally, businesses want to wait until their crucial applications work before they upgrade. Bugginess isn't new, either; I learned more than 35 years ago to "never install .0 of anything".
Most businesses think those plans now seem too aggressive, said Forrester analyst Benjamin Gray.
While corporate technology departments are looking forward to some of Vista's security features and easier administration tools, there's little reason to switch if the more secure PCs end up choking on a critical piece of software.
"They're waiting for Microsoft to bless it with a service pack," said Gray, referring to a major software update that fixes bugs.
To me, though, the interesting part of the story concerns the usability of the new security features:
One of the most common annoyances: Microsoft's user account control feature, designed to protect unwitting Web surfers from spyware and viruses that would otherwise install themselves on the computer.

Anyone who has used Vista knows exactly what he's talking about: there are lots of pop-ups, asking you to confirm that you really intended some particular action. Going forward, I don't think this is a viable approach; people won't put up with it. But is there an alternative?
Dan Cohen, chief executive officer of Silicon Valley startup Pageflakes, bought a Vista laptop a couple of months ago. After one too many pop-up windows warning of possible threats from the Internet, Cohen switched the control feature off.
Now he gets pop-ups warning him that turning off UAC is dangerous.
"I feel more secure — and more irritated," he said.
Some of the pop-ups occur when you try to do something that's inherently privileged, such as installing a new device driver. Unix and Linux users have long been familiar with this, and they know to use su or sudo prior to issuing such commands. Other things, though, are extra protections against what had been unprivileged operations, such as specifying that certain programs should be executed at log-in time. On Windows, this could be done simply by copying the program to the Start Up folder; on Unix and Linux, it simply involves editing .profile or the like. However, this is now an operation that requires confirmation — and rightly so, because viruses and worms have long exploited this feature (or the registry-based equivalent) in order to restart at reboot time.
But there's a major downside: we're in a situation where users are routinely prompted to click "Yes", often to deal with threats they don't understand. You have to click "Yes" to protect yourself, but if people are habituated to click "Yes" frequently, the protective effect of Windows Vista will be gutted.
In fact, it's worse than that. Many new machines come preloaded with what Walter Mossberg of the Wall Street Journal calls "craplets". These are trial and advertising applications, and it isn't always obvious how to make them go away. For example, on my new Vista/Ubuntu box, I had to delete some vendor-supplied desktop clutter asking me to subscribe to some photo site. (Perhaps I'll name the vendor in another post…) A few days later, though, I received a pop-up asking me again to subscribe. And shortly after that, I was asked to permit some vendor-supplied application to update itself. Should I do this?
The Vista pop-ups warn you not to say "yes" unless it's an action you initiated. Fair enough, and I assume I'd see the Vista pop-up if I permitted the vendor software upgrade to continue. But where did that come from? Is it really software that was on my machine, out of the box, and hence no less (and no more) trustworthy than the rest of the preloaded software? Not that that's a ringing endorsement — I may be letting myself in for a new round of solicitations for photo sites or auction sites or the rest of the annoyances I hope I've deleted — but what if the update request was from a piece of malware I'd somehow acquired? How do I know what I'm consenting to?
Microsoft is in a very difficult position here. Explicit permission before dangerous operations are performed is clearly necessary. On the other hand, the frequency of the permission requests and the lack of clarity about what will actually be done pose a usability (and hence a security) challenge. Consider this request for permission when I asked Windows to delete a program:
Anyone want to tell me exactly what it is I'm being asked to consent to? What program is being uninstalled or changed (and which is it)?
Mind you, I'm not blaming Microsoft. While some of the security usability woes of Vista are undoubtedly due to the need for backwards compatibility with their older, horribly insecure operating systems, others — like this example — are inherent in the problem. The real question is what to do. As I've often remarked, if we knew the answer it wouldn't be research.
13 July 2007
I keep a few "fidget toys" on a table in my office — they're great fun to play with during meetings. My daughter just sent me an article on one of my (and my visitors') favorites, the "rattleback".
19 July 2007
Some researchers have "solved" checkers. That is, they have a program and database that will win or draw any game, playing either color.
They worked by building an end-game database of all positions with 10 or fewer pieces. It doesn't matter how you get to that level; once you're there, the database can guarantee a favorable result.
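The idea of a table assigning a game-theoretic value to every position is easy to illustrate on a much smaller game. This sketch "solves" a trivial subtraction game rather than checkers, but the principle — exhaustively evaluating every reachable position — is the same:

```python
# Toy illustration of solving a game: build a table giving the value of
# every position. The game: players alternately take 1-3 stones; whoever
# takes the last stone wins. (A stand-in for checkers, not checkers itself.)
from functools import lru_cache

@lru_cache(maxsize=None)
def winning(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    # A position is a win iff some move leads to a losing position.
    return any(not winning(stones - take)
               for take in (1, 2, 3) if take <= stones)

# The "endgame database": every position up to 20 stones, mapped to its value.
table = {n: winning(n) for n in range(21)}
print(table[4])   # False: multiples of 4 are losses for the player to move
```

Checkers required vastly more machinery, of course, but the guarantee is the same kind: once the table covers a position, the outcome from there is known with certainty.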
Update: see the excellent IEEE Spectrum article on the program.
20 July 2007
There's an interesting New York Times article on the use of cell phone tracking data for criminal prosecutions. It's a classic example of secondary uses of data. Briefly, phone companies keep several months' worth of tracking data: which cell sites talked to which phones, and when. This data can be subpoenaed by prosecutors and used as evidence in criminal cases. (Oddly enough, the story was in the New York section of the paper, not the national or technology sections.)
There are a variety of legal issues about the validity of such evidence that I'm not going to discuss. These include accuracy (did you know that during busy periods, your call may be handed off to a more distant cell site? I didn't.), whether the location of the phone corresponds to the location of some particular person, etc. My focus here is on privacy.
First — as I discussed in a post on pen registers, the data is almost certainly available to prosecutors with little trouble. After all, subscribers voluntarily "give" their location to the phone company, and given that location data shows up on phone bills it's hard to argue that people don't know this. It might take specific statutory authority for prosecutors to get this without a subpoena, but such a law would almost certainly pass constitutional scrutiny.
Second, it's not just criminal cases; similar data can be and has been used in things like divorce cases.
The root issue, though, isn't legal. Rather, it's one fundamental to the privacy problem: the secondary use of data. That is, data legitimately and properly collected for one purpose, with the consent of the subject and perhaps for necessary technical reasons (the cellular phone system can't work if the network doesn't know which towers are near which phones), can be retained and used for other purposes. The purpose of cell phone location data is first, to make the network function, and second, for billing records; it is not intended for use by divorce lawyers or prosecutors.
Carriers' privacy policies vary, and some aren't at all attractive from a privacy perspective. One states that "We maintain a database with this location and route information, and may keep such information indefinitely." It goes on to say
We may disclose to unaffiliated third parties without your consent information about you that we collect, including information that we collect through your registration to be a customer, through one of our promotions, or through your request to us or one of our partners for details about our services. Such third parties may use this information (including your name, telephone number, and email and mailing addresses) to promote their products and services to you.
Lots of what we do in a digital world creates data. Curtailing secondary uses is key to maintaining privacy.
23 July 2007
A buffer overflow flaw — a very common programming bug that can have serious security consequences — has been found in the iPhone by Charlie Miller, Jake Honoroff, and Joshua Mason of Independent Security Evaluators (Avi Rubin's company). Yes, it's a security problem; yes, Apple needs to fix it ASAP. A technical description of the problem is here.
It's not the end of the world, though. (More details on my opinion are in the New York Times article.) The I.S.E. FAQ says it best:
Should I turn my iPhone off and lock it in a drawer until Apple fixes this? Not unless you plan to do the same to all the other computers you own. The iPhone is an internet connected device running a relatively full featured software suite: this research shows that it is vulnerable just like many other similarly capable devices, both PCs and embedded systems.

In other words, exercise caution, not paranoia.
26 July 2007
For years, some of us have warned about the risks of buggy code in law enforcement software. That is, the target of the investigation will do something nasty to the law enforcement machine, thus evading surveillance or worse. Frequently, this has been in the context of wiretap software; see, for example, Comments on the Carnivore System Technical Review Draft, Tapping, Tapping On My Network Door, and Carnivore and Open Source Software.
We now have some concrete examples in a related field, forensic analysis software. Some researchers have found bugs in forensic software packages. The article quotes one of the researchers: "Basically we can make it impossible to open up a hard drive and look at it." Can the bugs be used to take over the analyst's machine? The researchers aren't saying yet.
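As a sketch of how such an attack works (the file format here is invented, not taken from any real forensic package): a parser that blindly trusts a length field embedded in the evidence can be pushed into reading garbage, over-allocating, or crashing:

```python
# Sketch: a forensic-style parser that trusts an attacker-controlled length
# field in the evidence, versus one that validates it. Format is invented.
import struct

def naive_read_record(blob: bytes) -> bytes:
    """Read a [4-byte big-endian length][payload] record, trusting the length."""
    (length,) = struct.unpack_from(">I", blob, 0)
    return blob[4:4 + length]          # no sanity check against len(blob)

def careful_read_record(blob: bytes) -> bytes:
    (length,) = struct.unpack_from(">I", blob, 0)
    if length > len(blob) - 4:
        raise ValueError("length field exceeds available data")
    return blob[4:4 + length]

crafted = struct.pack(">I", 0xFFFFFFFF) + b"oops"   # claims a 4 GB payload
print(len(naive_read_record(crafted)))  # silently returns truncated junk
```

In a memory-unsafe language the naive version is a buffer overflow waiting to happen; even here, the tool quietly produces wrong output from crafted evidence, which is exactly what a defense attorney will seize on.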
The societal implications are obvious: defense attorneys are going to have a field day. They're going to quiz analysts — who are not expert on the internals of the packages they're using — about why they think the software is trustworthy. "How do you know your buggy software didn't miss exculpatory evidence?" They're also going to subpoena vendors and vendor data, including source code, change logs, test plans, bug report databases, etc. If the vendor can't or won't produce, they'll try to get any forensic evidence excluded.
We're already starting to see things like this in other criminal cases where crucial evidence depends on software. In at least one drunk driving case, a Florida appeals court ordered a vendor to let a defense expert examine source code to an alcohol breath test machine; if the vendor refused, the breath test evidence would be thrown out. The court's language was very strong: "It seems to us that one should not have privileges and freedom jeopardized by the results of a mystical machine that is immune from discovery."
The obvious counter for law enforcement is to use some form of "certified" software. That is, some outside party would evaluate the software and certify its correctness. Note that this is a very difficult (and expensive) process. Furthermore, for fairness in criminal cases, it probably needs to be an adversarial process. Imagine how this will play out in a high stakes, well-funded white collar criminal case.
Ultimately, of course, this is a question of how to produce and verify high-assurance software. Several decades of work tell us that this is a hard problem.
28 July 2007
Of late, insider attacks have gained a lot of attention. Such attacks are quite pernicious, not just for the damage they can do — and they can do a lot — but also because of the effect on morale. What should we do about the problem? There are no simple answers.
It should be noted that insider attacks are not one phenomenon. Rather, there are at least three different types of insider attacks. In general, the different types of attack require different defenses.
The first type of attack involves an abuse of discretion rather than any technical failing. That is, the misbehaving insider performs some action he or she is authorized to do, but for the wrong reason. To give one example, I, as a professor, have a fair amount of discretion in submitting grade change forms for past students. If, however, I choose to give good grades to people who pay me well, I would be acting improperly. This is an abuse of authority attack. (There have been recent allegations of just this sort of attack, at Touro College in New York and Diablo Valley College in California. In the former, those charged include the former director of admissions and the former director of the computer center. The accused in the California case are part-time student employees. The difference in rank is striking; employee malfeasance is not limited to any one stratum.)
There's not a lot that can be done technically to prevent abuse of authority attacks. The best that can be done is to create copious log files, recording what was done, by whom, and when. Statistical analysis can spot who seems to be performing an unusual number of such actions, even nominally legitimate ones. In the event of a prosecution, these logs can be correlated with, say, bank deposit records. Of course, this means that the organization's own logs need to be retained for long enough; failure to do that was a problem in the Greek cellphone tapping scandal. Other defenses are more procedural: two-party control, audits, and so on. (Note, of course, that technical mechanisms may be needed to support these defenses: are log files sufficient for an audit, is there proper support for two-party control, etc.?)
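As a sketch of the sort of statistical screening I have in mind (the log format and the threshold are invented for illustration): count privileged actions per insider and flag anyone far above the norm.

```python
# Sketch: flag insiders whose count of a privileged action (here, grade
# changes) is well above the mean. Log format and threshold are invented.
from collections import Counter
from statistics import mean, stdev

log = [  # (who, action) pairs, as might be extracted from an audit log
    ("prof_a", "grade_change"), ("prof_b", "grade_change"),
    ("prof_c", "grade_change"), ("prof_c", "grade_change"),
    ("prof_c", "grade_change"), ("prof_c", "grade_change"),
    ("prof_c", "grade_change"), ("prof_c", "grade_change"),
]

def flag_outliers(log, threshold=1.0):
    """Return users whose action count exceeds mean + threshold * stdev."""
    counts = Counter(who for who, action in log if action == "grade_change")
    mu, sigma = mean(counts.values()), stdev(counts.values())
    return [who for who, n in counts.items() if n > mu + threshold * sigma]

print(flag_outliers(log))
```

A flag here is only a lead, of course — the action may be perfectly legitimate — which is why this feeds an audit process rather than replacing one.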
In knowledge-based attacks, the bad guy uses non-public knowledge gained in the course of his or her employment. This may be the identity of valuable targets (e.g., which machine has the master database), details of how systems work, insider terminology or names of employees for use in social engineering attacks, or even passwords.
Defenses here depend on what type of knowledge is being exploited. As always, log files help. An intrusion detection system (IDS) is one useful approach, though certain insiders will know what the system is looking for, what the action thresholds are, where honeypots are located, etc. In some environments, compartmentalization of information can help, though that can have a deleterious effect on productivity.
Ideally, a system is secure no matter how much an enemy knows. This principle — that one should not rely on security through obscurity — was set forth by Kerckhoffs in 1883 in his La Cryptographie Militaire: "Il faut qu'il n'exige pas le secret, et qu'il puisse sans inconvénient tomber entre les mains de l'ennemi." (Roughly, "the system must not require secrecy, and it must be able to fall into enemy hands without causing inconvenience.") Keeping certain information secret is a perfectly valid first line of defense, but it shouldn't be the only one.
The third form of insider attack is based on privileged access. In this case, the attacker has already passed some of the defenses, such as firewalls. The failure here can be one of two different types. First, the outside-facing defense may be the only layer. In all but the smallest organizations, this is wrong. There is no reason why all employees should have all possible access privileges. In the second case, the rogue employee can use his or her access to exploit other security failings, such as buggy code. (I noted in 1994 that the primary purpose of a firewall is to keep bad guys away from buggy code.) Defense in depth is a good idea; the failure here of one layer shows the benefit of the others.
Privileged access attacks are the most amenable to technical solutions. Certainly, intrusion detection software is a useful tool here, though its primary output — that something bad has happened — is useful whether the attack came from inside or outside. We can also strengthen access control rules to limit access to only those people or programs who need it. Often, of course, the underlying system doesn't support such fine-grained access control. For example, in one common configuration of the PHP web scripting language, it is impossible to secure data files used by one PHP application from any other PHP application on the same web server — and that includes user-written PHP scripts if your web server permits those. This isn't much of an issue for large companies, who can use dedicated servers for each application; it is a problem for smaller organizations.
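A sketch of the underlying problem (paths invented): once two applications run as the same OS user, file permissions can no longer separate them, no matter how restrictive the mode bits are.

```python
# Sketch: when every web application runs as the same OS user, file
# permissions cannot keep one application's data private from another.
import os
import tempfile

# "Application A" stores its private data and locks it down:
app_a_dir = tempfile.mkdtemp()
secret = os.path.join(app_a_dir, "users.db")
with open(secret, "w") as f:
    f.write("alice:s3cret")
os.chmod(secret, 0o600)   # owner-only access -- the best A can do

# "Application B" runs as the *same* user, so 0o600 is no barrier at all:
with open(secret) as f:
    stolen = f.read()
print(stolen)   # B reads A's data despite the restrictive permissions
```

Per-application user IDs (or per-application servers) fix this, which is exactly the option smaller organizations on shared hosting don't have.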
It is important to realize that insider attacks are a real threat. The CIA had Aldrich Ames and the FBI had Robert Hanssen. A recent survey (by, admittedly, a party who stands to benefit) suggests that many companies believe they are being infiltrated by organized crime. Another report says that 89% of fraud (note: this is fraud in general, not computer crime) is committed by insiders; worse yet, 60% of fraudsters are members of senior management or board members.
It is clear that there is no one solution to insider threats. Business process is a lot of the answer. Background checks, especially in the national security arena, are another. Computer scientists have their own role to play, both on their own and by supporting the business process.