December 2011
Lessons from Suppressing Research (25 December 2011)
Weird Idea of the Day (27 December 2011)
Weird Idea of the Day -- Analysis (28 December 2011)

Lessons from Suppressing Research

25 December 2011

A recent news story noted that a U.S. government agency had asked some researchers to withhold crucial details about an experiment that showed that the avian flu strain A(H5N1) could be changed to permit direct ferret-to-ferret spread. While the problem the government is trying to solve is obvious, it’s far from clear that suppression is the right answer, especially in this particular case.

There are a few obvious parallels to other situations. Most notably, in 1940 American physicists decided to stop publishing papers on nuclear fission. That fact itself — the absence of published research — convinced at least one Soviet scientist, G.N. Flyorov, that the Americans and British were working on a bomb. Arguably, this was the crucial factor in the Soviet decision to proceed with their project; certainly Flyorov mentioned this aspect in his letter to Stalin, and Stalin took that point very seriously. (I apologize for pointing to a paper behind a paywall; it’s the most authoritative reference I know of. You can find other discussion here and on Flyorov’s Wikipedia page.) In this case, while secrecy may have concealed important details, it gave away "the high-order bit": the area looked promising enough to investigate, despite the exigencies of wartime.

That moratorium was voluntary. In the 1960s and 1970s, though, the NSA tried to suppress outside knowledge of cryptography and NSA’s own work; more to the point, they also tried to suppress civilian academic research on cryptography. There were obvious constitutional problems with that, but the Public Cryptography Study Group (formed by the American Council on Education in response to the NSA’s call for a dialog) recommended a voluntary system: researchers could submit their papers to NSA; it in turn could request (but not demand) that certain things not be published.

As a vehicle for stopping or even slowing research, this notion was a failure. Possibly, the NSA’s intelligence-gathering efforts have been hurt by widespread knowledge of cryptography; certainly, there’s far more information out there today than there was a generation ago. In a very strong sense, though, they’ve won by losing: their real mission of protecting the country has been helped by the flourishing of cryptography for civilian use. To give just one example, cell phone cloning in the 1990s was largely done for drug dealers who wanted to be able to make and receive calls anonymously. Today, though, cryptographic authentication is used, eliminating an entire class of attacks.

It’s also worth pointing to the tremendous achievements by academic cryptographers who have shown how to do more with modern cryptography than exchange keys and encrypt and sign messages. What James Ellis, the GCHQ researcher who invented non-secret encryption — what today is called public key cryptography — once said to Whit Diffie is quite accurate: "You did more with it than we did". But the NSA tried to suppress the entire field.

A third example is more recent still: the full disclosure of the details of security holes in software. It is still debated whether it’s a net benefit: do we benefit if the bad guys also learn of the attacks? On the other hand, it’s indisputable that many holes are closed at all (or closed promptly) only because of disclosure or the threat thereof. Too many companies respond to reports of attacks by denying them, questioning the competence or integrity of the discoverer, or even using legal means to try to suppress the report. Far too often, it seems, bugs are fixed only because of this public disclosure; without that, they’d remain unfixed, leaving systems vulnerable to anyone who rediscovered the attack.

The conclusion, then, is that suppression has greater costs than it might seem. But what about this case? As before, there are costs and benefits, as an interview with one of the scientists involved, Ron A. M. Fouchier, makes clear. For one thing, what these guys did can’t easily be replicated in a garage lab by amateurs: "You need a very sophisticated specialist team and sophisticated facilities to do this." Terrorists have easier ways to launch bioattacks:

You could not do this work in your garage if you are a terrorist organization. But what you can do is get viruses out of the wild and grow them in your garage. There are terrorist opportunities that are much, much easier than to genetically modify H5N1 bird flu virus that are probably much more effective.

And finally, there’s the cost of suppression. It is clear from the interview that public health officials need to know the details, so they know which flu mutations to watch for. Too many people need to know for secrecy to be effective:
We would be perfectly happy if this could be executed, but we have some doubts. We have made a list of experts that we could share this with, and that list adds up to well over 100 organizations around the globe, and probably 1,000 experts. As soon as you share information with more than 10 people, the information will be on the street. And so we have serious doubts whether this advice can be followed, strictly speaking.

(I have personal experience with this. Some 20 years ago, I invented DNS cache contamination attacks. After talking with various people, I decided not to publish, choosing instead to share the paper with trusted colleagues and with CERT. These colleagues, in Washington and elsewhere, undoubtedly shared it further still. Perhaps someone shared it imprudently, perhaps it was stolen by hacking, or perhaps the bad guys rediscovered the attack, but eventually the attack showed up in the wild — at which point I published. I concluded that the real effect of the delay was to hinder the development of countermeasures. In other words, I was wrong to have held back the paper.)

The ultimate decision may rest on personal attitudes. To quote Fouchier one more time, "The only people who want to hold back are the biosecurity experts. They show zero tolerance to risk. The public health specialists do not have this zero tolerance. I have not spoken to a single public health specialist who was against publication."

Weird Idea of the Day

27 December 2011

On a cryptography mailing list, someone asked how to check for "similar" passwords if all that was stored was a hashed value. The goal, of course, is to prevent people from doing things like adding a period, incrementing a digit, etc. Partly in jest, I suggested publishing the old password when a new one is set. That would also discourage people from using the same password for multiple services.
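
For what it is worth, such a check can be attempted even when only a hash of the old password survives: at change time the new password is available in the clear, so the site can hash a few trivial variants of it (the add-a-period, increment-a-digit sort) and compare them against the stored hash. The sketch below is my own illustration of that approach, not anything proposed on the list; the hash scheme and variant rules are placeholders.

    import hashlib

    def hash_pw(password: str, salt: str) -> str:
        # Illustrative only; a real system would use a slow, salted scheme
        # such as bcrypt or scrypt rather than a single SHA-256.
        return hashlib.sha256((salt + password).encode()).hexdigest()

    def too_similar(new_pw: str, stored_old_hash: str, salt: str) -> bool:
        """True if some trivial variant of the new password matches the old hash."""
        variants = {new_pw, new_pw + ".", new_pw.rstrip(".")}
        if new_pw and new_pw[-1].isdigit():
            # Catch "incremented a digit": the old password may be the new one minus one.
            variants.add(new_pw[:-1] + str((int(new_pw[-1]) - 1) % 10))
        return any(hash_pw(v, salt) == stored_old_hash for v in variants)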

It’s an evil idea, of course — but now I’m wondering if it might actually make sense…

Tags: security

Weird Idea of the Day -- Analysis

28 December 2011

Yesterday, I posted a deliberately provocative idea, one I called "evil": disclosing old passwords when the password is changed. The objections were predictable; I agree with many of them. The idea, the problems with it, and what they say about passwords are worth a deeper analysis.

The most trenchant objection was from people who reuse the same passwords on multiple sites. Precisely: this, in my opinion, is the single biggest security flaw with passwords as they’re used today. Never mind the old bugaboo about guessable passwords; while they’re sometimes an issue (see below), today’s attackers are generally more likely to use keystroke loggers to collect passwords as they’re typed, launch phishing attacks, or hack web sites and collect them en masse, especially from poorly-designed servers that store them in the clear. In the latter case, you’re in serious trouble if you’ve reused a login/password pair, because the attacker now knows your credentials for many other sites.

The defense is simple: don’t do that. Don’t reuse passwords.

I can hear the objections to this, too: people have too many passwords to remember, especially if they’re all "strong".  I agree; I have over 100 passwords for web sites alone. The solution I suggest is obvious, if you’re willing to ignore religious doctrine: use a pseudo-random generator to produce as many "strong" passwords as you need, and store them somewhere safe and convenient. That’s what I do; for me, "safe and convenient" is encrypted cloud storage, so I can get at them from the three computers and two iToys I regularly use. For other people, it might be on an encrypted flash drive, or a piece of paper, or even the proverbial yellow sticky attached to the monitor.
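
Here is a minimal sketch of that generate-and-store approach, using the secrets module from the Python standard library; the vault layout, site names, and 16-character length are arbitrary choices of mine, not part of the original suggestion.

    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def make_password(length: int = 16) -> str:
        """Produce one strong password from a cryptographically secure generator."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    # One distinct password per site; the resulting table is what gets stored
    # somewhere safe and convenient (encrypted file, flash drive, even paper).
    vault = {site: make_password() for site in ("bank.example", "mail.example", "blog.example")}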

Yes, it’s heretical; we’ve been told for decades that we shouldn’t do that. But security isn’t a matter of being in a state of grace, nor is insecurity equivalent to sin. Rather, it’s a question of cost-effective defenses against particular threats. For most passwords — repeat, most — the threat is not someone who has wandered into your home office, nor is someone likely to mug you for the flash drive on your keychain in order to learn the password to your Twitter account. Rather, they’ll plant keystroke loggers and hack servers. They may resort to guessing attacks, but almost always that’s done for targeted attacks, where they’re trying to get you in particular. Against ordinary, large-scale attacks, that isn’t done as much; there’s not nearly as much benefit to the attacker. But if you follow my suggestion, you can make your random passwords as strong as you wish, at no extra cost; software can remember R@s1o+=/)ket` as easily as it can 123456.

Now — there are passwords one perhaps shouldn’t treat that way. Important passwords for access within your organization, where physical access to your monitor becomes a concern, shouldn’t get the yellow sticky treatment. The same might hold true for bank account passwords, even on home monitors. Again, though, you have to analyze the threat model: who would benefit from your passwords, and how would they be able to get them?

Let’s turn back to my evil idea. The second most common objection (other than "I can’t remember that many strong passwords") had to do with patterns of passwords. Again, yes; that’s precisely the weakness (and it is a weakness; people have written programs to guess new passwords based on old ones) the idea was intended to combat. A more interesting question is what threat model you’re trying to guard against if you bar similar successor passwords. The only possible answer is that you think an adversary (a) has a given old password, (b) hasn’t yet extracted all necessary value from it, and (c) values it (as opposed to an arbitrary one on that site) enough to launch a similarity attack against precisely that user. I suspect that such cases are quite rare. Ordinary passwords are available in bulk; useful financial passwords are often employed quickly to loot an account; login passwords are used quickly to plant back doors.
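
To make that weakness concrete, an attacker who already holds an old password needs only a handful of transformation rules to guess its likely successor. The rules below are a hypothetical sample of my own, not taken from any particular cracking tool.

    def successor_guesses(old_pw: str):
        """Yield plausible next passwords derived from a known old one."""
        yield old_pw + "."                      # user appended punctuation
        yield old_pw.rstrip(".")                # ... or dropped it
        if old_pw and old_pw[-1].isdigit():
            yield old_pw[:-1] + str((int(old_pw[-1]) + 1) % 10)   # incremented a trailing digit
        else:
            yield old_pw + "1"                  # started a counter
        yield old_pw.capitalize()               # capitalized the first letter
        yield old_pw.swapcase()

    # e.g. list(successor_guesses("winter11")) yields a short but plausible guess list.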

So — why was my original idea "evil", if it defended against a genuine problem and didn’t really hurt user security in any real sense? The problem with it is that it encounters extremely strong user resistance. I can’t think of any way around that resistance, and there’s rarely much benefit to annoying your users that much. If you have that much of a problem with bad passwords, the proper solution is to use a better authentication mechanism, not to make people very unhappy.

One more thing: when analyzing security behavior, look at the threat model and ignore religion. The classic Morris and Thompson paper on password security taught us about an attack — password-guessing — but it did not tell us when that attack is a real threat. Treating it as holy writ divorced from the surrounding reality does no one any good.

Tags: FTC