27 September 2010
A few months ago, I wrote about Stuxnet, a digitally-signed worm that used a previously-unknown Windows vulnerability to attack SCADA systems. I called it "scary". Much more has been learned about it since then; many experts are calling it a cyberweapon developed and launched by a nation-state, probably against Iran, and possibly against the Iranian nuclear program. I don't know if I'd go quite that far; what I will say is that Stuxnet was written by a group with very impressive resources and a great deal of expertise, and was precisely aimed at a very high-value target. The existence of this code raises some fascinating issues, and presents both threats and opportunities. I will state categorically that I think that Stuxnet should settle the debate about the possibility of weaponized software; someone clearly has the ability to gather the intelligence and build the software necessary to achieve military goals. Whether or not this is such an incident is a separate issue; the capability demonstrably exists.
I should add a disclaimer: I haven't done any of my own analysis of Stuxnet, nor have I even seen any technical papers; I'm relying on news articles and blog postings. I certainly don't have any inside information whatsoever.
Let me first summarize what is known. This is a brief summary; I'm omitting all of the interesting technical details. For those, I refer you to the two blog entries I cite above.
Stuxnet uses at least four so-called "0-days" — attacks that are not yet known to the vendor or the security community. (It is a sad fact that most penetrations are due to holes for which patches exist.) It includes code that was digitally signed by keys belonging to reputable companies. It spreads by a variety of mechanisms, including USB flash drives and network connections. It can be controlled and updated by several different mechanisms, including a specific domain and a peer-to-peer network. It targets Siemens SCADA systems. It checks enough details of the exact SCADA system it is running on to ensure that its damage is done only to a very specific target. If you had a Siemens SCADA system controlling your basement chemical plant, you'd probably be quite safe — unless you, and only you, were the target. When it finds its target, it reprograms the so-called PLCs (programmable logic controllers) to do something — but just what isn't knowable without knowing the precise details of that particular installation. The software contains "rootkits" — software to hide the existence of the penetration — not just for Windows, but also for the PLCs. Finally, more than 50% of the known infections are in Iran; Indonesia, Pakistan, and India have also seen significant numbers.
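The shape of that fingerprinting logic can be sketched abstractly. Everything below is invented for illustration — the field names, model strings, and values are hypothetical, not Stuxnet's actual checks (which, per the reports, inspected details of the specific Siemens PLC configuration) — but it shows why the worm is inert everywhere except on one exact configuration:

```python
# Hypothetical sketch: a payload that stays dormant unless the controller's
# configuration matches one very specific profile. All fields and values
# here are invented for illustration.

EXPECTED_PROFILE = {
    "plc_model": "S7-315",       # hypothetical model identifier
    "module_count": 6,           # hypothetical attached-module count
    "drive_frequency_hz": 1064,  # hypothetical operating parameter
}

def observed_profile(plc):
    """Collect only the fields the payload compares (plc is a plain dict here)."""
    return {key: plc.get(key) for key in EXPECTED_PROFILE}

def is_intended_target(plc):
    """Fire only on an exact match; on every other system, do nothing."""
    return observed_profile(plc) == EXPECTED_PROFILE

# A system that differs in even one detail is left alone:
bystander = {"plc_model": "S7-315", "module_count": 4, "drive_frequency_hz": 1064}
target = {"plc_model": "S7-315", "module_count": 6, "drive_frequency_hz": 1064}
```

The exact-match design is what makes the "basement chemical plant" safe: wide propagation combined with a narrow trigger condition means the worm can travel far while harming only the one installation whose details the attacker already knew.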
I conclude that Stuxnet was aimed at a high-value target because of the multiplicity of mechanisms it uses. 0-days, though by no means unknown, are comparatively rare. To use (at least) four in one attack means that someone really wanted the attack to succeed, despite the chance that one or more would be patched by Microsoft or (inadvertently) blocked by the site's configuration. When you use a 0-day, you may "spend" it, as was pointed out in a Rand Corporation study; if your attack software is discovered (as was the case here), the holes will be patched. But someone was willing to spend four of them on a single attack.
The presence of that many 0-days itself suggests that the attacker has a lot of resources. We can add to that access to and experience with Siemens SCADA systems. Windows hackers are quite common; SCADA systems hackers are much rarer. SCADA hackers who can develop rootkits are a rare species indeed.
The code was digitally signed, which implies that the attacker had somehow gained access to private keys that should have been closely guarded. We don't know how those leaked.
The attack software verified that it was in the right place and issued commands to the PLCs, commands that are meaningless without very specific target knowledge. How did the attacker learn these details? Inside help? We don't know, and analyzing the code isn't likely to tell us.
Someone really wanted to do something to a particular target. But who wanted to do what to whom? The evidence that it was aimed at the Iranian nuclear program is circumstantial: there's a high density of infections there, and what target could be of more interest to a sophisticated attacker? Some sources suggest that it was Israel or the U.S.: "both have the skill and resources to produce complicated malware such as Stuxnet". There was even a news story quoting a former Israeli security cabinet member as saying "We came to the conclusion that, for our purposes, a key Iranian vulnerability is in its on-line information. We have acted accordingly." One analyst cites evidence that Stuxnet has already struck at the Natanz centrifuge facility. Al Jazeera says that Iranian nuclear facilities have been hit by it, but the report gives few details other than to say that there was no damage. And the Wall Street Journal quotes Iranian officials as saying that "some personal computers of the Bushehr nuclear-power plant workers are infected with the virus." That story makes another interesting comment:
The U.S. would be a less likely suspect because it uses offensive cyber operations infrequently and usually only under very specific circumstances when officials are confident the operation will affect only its target, current and former U.S. officials said. It has opted against cyber-attack proposals when the effect was unpredictable, as it did when it considered then rejected the possibility of mounting a cyber attack on Iraq's financial system before the 2003 invasion. Stuxnet, by contrast, has affected a broad range of targets.
As noted, though, Stuxnet is harmless except to the intended target.
A New York Times article has a different slant: it asserts that the software isn't that sophisticated, because it spread so widely. I interpret that differently: it spread widely because whoever launched it had no direct access to the target system, but it only damaged that target. Elsewhere, it was harmless, more or less the cyber equivalent of cytomegalovirus: very many people carry it, but almost no one is made ill.
What was the high-value target in Iran? It's hard to say. We are told that "VirusBlokAda, an obscure Belarusian security company, found it on computers belonging to a customer in Iran". Finding malware, especially malware that has gone to some trouble to conceal itself, isn't easy. Some Iranian company was suspicious enough to seek outside analytic help. After a facility failed? Perhaps. A nuclear facility? No data. A high-value facility? Per the above, probably; a low-value facility probably wouldn't have noticed anything, since the rootkits would have obscured the presence of Stuxnet and no damage would have been done except to the target facility. It does seem that there are many other facilities within Iran that a very sophisticated attacker might go after; it's only people on the outside who think only of the nuclear weapons complex.
What are the implications? One obvious conclusion is that there are a lot of systems that were previously thought to be safe that have to be considered at risk. Some unknown party has the ability to launch this grade of attack. Other enemies or potential enemies need to take this ability into account. One possible response, of course, is to develop their own cyberattack capabilities. In that respect, the very public analysis of Stuxnet is going to educate people: this is the way the pros do it. The specific holes exploited may not be worth much any more; the style of the attack will be very educational indeed. It is said that an entire generation of civilian cryptologists cut its teeth on DES, the first example of an NSA-approved cipher to be made public. Will the same thing happen here? If so, even the attacker is at greater risk now than before.
The ability to do precision targeting is quite intriguing. One concern about cyberwar is the potential for damage to civilian infrastructure, which is against international law. Stuxnet shows that (under the right circumstances) attacks can be very carefully directed. That, to my knowledge, had not been anticipated in writings on the subject.
the United States conducts many highly visible military training exercises involving both its conventional and nuclear forces, at least in part to demonstrate its capabilities to potential adversaries.
Stuxnet is a capability demonstration, though by an as yet unknown party. If you are a general who believes that Stuxnet came from an enemy of your nation, you now have some idea of that enemy's cyberattack capabilities. Will this promote a cyberarms race? Or will it help keep the peace, much as Mutually Assured Destruction (rightly known as "MAD") deterred nuclear exchanges during the Cold War?
On the other hand, U.S. capabilities for offensive cyber operations are highly classified… To the extent that U.S. capabilities for cyber operations are intended to be part of its overall deterrent posture, how should the United States demonstrate those capabilities? Or is such demonstration even necessary given widespread belief in U.S. capabilities?
There is one more implication for defense: a so-called ".secure" network isn't a strong defense. Attempting to isolate critical networks still leaves the door open to other attack vectors, such as infected USB drives. The question that has to be asked is how to balance the incremental risk from Internet connections against the benefits, such as greater ability to implement a Smart Grid.
There are still many questions that haven't been answered, at least publicly, about Stuxnet. There are some that I suspect will never be answered in the open literature. But as I said in the first paragraph, I think we now have an existence proof for weapons-grade attack software. Policy-makers around the world need to take this into account; claiming it can't happen is no longer tenable. The real question is the cost of this sort of attack. Remember, though, that a single F-35 fighter plane is estimated to cost $112M 2010 dollars; that's not exactly cheap, either.
23 September 2010
There's a short discussion roundtable on the NY Times website on social network site security. I was asked to participate; you can find my post there.
(I know my recent blog posts have appeared on other sites instead of this one. That doesn't represent a new policy on my part; it's purely coincidence: I was asked to comment on things I probably wouldn't have blogged about on my own. I may have something more to say about Stuxnet soon, but I'm waiting for the dust to settle; there are too many new news stories showing up.)
14 September 2010
According to Ars Technica, Intel has announced a plan for "known good" restrictions on programs. Only programs from trusted sources would be executable. I've written a long essay for the Concurring Opinions legal blog on why this is a very bad idea; rather than recreate it here with all of the links re-added, I'll just refer everyone to that posting.
8 September 2010
I'm currently an invited guest blogger on the Concurring Opinions blog for an online symposium discussing Jonathan Zittrain's book The Future of the Internet — And How To Stop It. Concurring Opinions is primarily a law blog; for this discussion, a number of technologists have been invited as well.
I invite everyone to join me and my colleagues over there!