Meltdown and Spectre: Security is a Systems Property

4 January 2018

I don't (and probably won't) have anything substantive to say about the technical details of the just-announced Meltdown and Spectre attacks. (For full technical details, go here; for an intermediate-level description, go here.) What I do want to stress is that these show, yet again, that security is a systems property: being secure requires that every component, including ones you've never heard of, be secure. These attacks depend on hardware features such as "speculative execution" (someone I know said that that sounded like something Stalin did), "cache timing", and the "translation lookaside buffer"—and no, many computer programmers don't know what those are, either. Furthermore, the interactions between components need to be secure, too.

Let me give an example of that last point. These two attacks are only exploitable by programs running on your own computer: a hacker probing from the outside can't directly trigger them. Moreover, since the effect of the flaws is to let one program read the operating system's memory, single-user computers, i.e., your average home PC or Mac, would seem to be unaffected; the only folks who have to worry are the people who run servers, especially cloud servers. Well, no.

Most web browsers support a technology called JavaScript, which lets the web site you're visiting run code on your computer. For Spectre, "the Google Chrome browser… allows JavaScript to read private memory from the process in which it runs". In other words, a malicious web site can exploit this flaw. And the malice doesn't have to be on the site you're visiting; ads come from third-party ad brokers, so a booby-trapped ad can carry the attack to perfectly legitimate sites.

In other words, your home computer is vulnerable because of (a) a hardware design flaw; (b) the existence of JavaScript; and (c) the economic ecosystem of the web.

Security is a systems property…

Bitcoin—The Andromeda Strain of Computer Science Research

30 December 2017

Everyone knows about Bitcoin. Opinions are divided: it's either a huge bubble, best suited for buying tulip bulbs, or, as one Twitter user rather hyperbolically expressed it, "the most important application of cryptography in human history". I personally am in the bubble camp, but I think there's another lesson here, on the difference between science and engineering. Bitcoin and the blockchain are interesting ideas that escaped the laboratory without proper engineering—and it shows.

Let's start with the upside. Bitcoin was an impressive intellectual achievement. Digital cash has been around since Chaum, Fiat, and Naor's 1988 paper. There have been many other schemes since then, with varying properties. All of the schemes had one thing in common, though: they relied on a trusted party, i.e., a bank.

Bitcoin was different. "Satoshi Nakamoto" conceived of the blockchain, a distributed way to keep track of coins, spending, etc. Beyond doubt, his paper would have been accepted at any top cryptography or privacy conference. It was never submitted, though. Why not? Without authoritative statements directly from "Nakamoto", it's hard to say; my own opinion is that Bitcoin originated in the anarchist libertarian wing of the cypherpunk movement. Cypherpunks believe in better living through cryptography; a privacy-preserving financial mechanism that is independent of any government fulfilled one of the ideals of the libertarian anarchists. (Some of them seemed to believe that the existence of such a mechanism would inherently cause governments to disappear. I don't know why they believed this, or why they thought it was a good idea, but the attitude was unmistakable.) In any event, they were more interested in running code than in academic credit.
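
For readers who haven't seen the mechanism, here is a toy sketch in Python of the core idea: a hash-chained ledger secured by proof-of-work. This is my illustration, not Nakamoto's code; the field names, the JSON encoding, and the tiny difficulty target are all assumptions made for the sake of a short, runnable example.

    # A toy hash-chained ledger with proof-of-work. Illustrative only; not
    # Bitcoin's actual block format or consensus rules.
    import hashlib
    import json

    DIFFICULTY = 4   # leading hex zeros required; tiny, purely for illustration

    def block_hash(block):
        # Hash the block's contents deterministically.
        encoded = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(encoded).hexdigest()

    def mine(prev_hash, transactions):
        # Search for a nonce that makes the block's hash meet the difficulty target.
        nonce = 0
        while True:
            block = {"prev": prev_hash, "txs": transactions, "nonce": nonce}
            h = block_hash(block)
            if h.startswith("0" * DIFFICULTY):
                return block, h
            nonce += 1

    # A three-block chain; altering block 1 invalidates every later block.
    genesis, h0 = mine("0" * 64, ["coinbase -> alice"])
    b1, h1 = mine(h0, ["alice -> bob: 5"])
    b2, h2 = mine(h1, ["bob -> carol: 2"])
    print(h0, h1, h2, sep="\n")

Because each block commits to its predecessor's hash, rewriting an old transaction means redoing the proof-of-work for every later block faster than the rest of the network can extend the chain.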

So what went wrong? What happened to a system designed as an alternative to, e.g., credit cards, where the "cost of mediation increases transaction costs, limiting the minimum practical transaction size and cutting off the possibility for small casual transactions"? Instead, today the Bitcoin network is overloaded, leading to high transaction costs. The answer is a lack of engineering.

When you engineer a system for deployment you build it to meet certain real-world goals. You may find that there are tradeoffs, and that you can't achieve all of your goals, but that's normal; as I've remarked, "engineering is the art of picking the right trade-off in an overconstrained environment". For any computer-based financial system, one crucial parameter is the transaction rate. For a system like Bitcoin, another goal had to be avoiding concentrations of power. And of course, there's transaction privacy.
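
On the transaction-rate point, the back-of-the-envelope arithmetic is worth spelling out. The inputs below are the commonly cited approximations (a 1 MB block size limit, a ten-minute average block interval, and roughly 250 bytes per typical transaction), not precise measurements:

    # Rough throughput arithmetic for Bitcoin as originally parameterized.
    block_size_bytes = 1_000_000   # 1 MB block size limit
    block_interval_s = 600         # ~10-minute average block interval
    avg_tx_bytes = 250             # typical transaction size (approximate)

    txs_per_block = block_size_bytes // avg_tx_bytes
    txs_per_second = txs_per_block / block_interval_s
    print(f"{txs_per_block} transactions per block, about {txs_per_second:.0f} per second")
    # => 4000 transactions per block, about 7 per second -- several orders of
    # magnitude below what a major credit card network handles at peak.

That is why the network congests as soon as demand exceeds a handful of transactions per second.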

There are less obvious factors, too. These days, "mining" for Bitcoins requires a lot of computations, which translates directly into electrical power consumption. One estimate is that the Bitcoin network uses up more electricity than many countries. There's also the question of governance: who makes decisions about how the network should operate? It's not a question that naturally occurs to most scientists and engineers, but production systems need some path for change.
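
That claim is easy to sanity-check in Python. The hash rate and energy-efficiency figures below are my own rough, late-2017 ballpark assumptions, not measured values:

    # Back-of-the-envelope power estimate; both inputs are rough assumptions.
    network_hashrate = 15e18    # hashes per second (~15 EH/s, assumed ballpark)
    joules_per_hash = 0.1e-9    # ~0.1 J/GH for efficient mining ASICs (assumed)

    power_watts = network_hashrate * joules_per_hash
    annual_twh = power_watts * 8760 / 1e12   # 8760 hours/year; 1 TWh = 1e12 Wh
    print(f"about {power_watts / 1e9:.1f} GW continuous, about {annual_twh:.0f} TWh per year")
    # => about 1.5 GW continuous, about 13 TWh per year -- in the range of a
    # small country's annual electricity consumption.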

In all of these, Bitcoin has failed. The failures weren't inevitable; there are solutions to these problems in the academic literature. But Bitcoin was deployed by enthusiasts who in essence let experimental code escape from a lab to the world, without thinking about the engineering issues—and now they're stuck with it. Perhaps another, better cryptocurrency can displace it, but it's always much harder to displace something that exists than to fill a vacuum.

Voluntary Reporting of Cybersecurity Incidents

4 December 2017

One of the problems with trying to secure systems is the lack of knowledge in the community about what has or hasn't worked. I'm on record as calling for an analog to the National Transportation Safety Board: a government agency that investigates major outages and publishes the results.

In the current, deregulatory political climate, though, that isn't going to happen. But how about a voluntary system? That's worked well in aviation—could it work for computer security? In a new draft paper with Adam Shostack, Andrew Manley, Jonathan Bair, Blake Reid, and Pierre De Vries, we argue that it can.

While there's a lot of detail in the paper, there are two points I want to mention here. First, the aviation system is supposed to guarantee anonymity. That's easier in aviation, where, say, many planes land at O'Hare on any given day, than in the computer realm. For that reason (among others), we're focusing on "near misses"—it's less revelatory to say "we found an intruder trying to use the Struts hole" than to say "someone got in via Struts and personal data for 145 million people was taken".

From a policy perspective, there's another important aspect. The web page for ASRS is headlined "Confidential. Voluntary. Non-Punitive"—with the emphasis in the original. Corporate general counsels need assurance that they won't be exposing their organizations to more liability by making such disclosures. That in turn requires buy-in from regulators. (It's also another reason for focusing on near misses: you avoid the liability question if the attack was fended off.)

All this is discussed in the full preprint, at LawArxiv or SSRN.

Facebook's Initiative Against "Revenge Porn"

16 November 2017

There's been a bit of a furor recently over what Facebook calls its "Non-Consensual Intimate Image Pilot". Is it a good idea? Does it cause more harm than good?

There's no doubt whatsoever that "revenge porn"—intimate images uploaded against the will of the subject, by an unhappy former partner—is a serious problem. Uploading such images to Facebook is often worse than uploading them to the web, because they're more likely to be seen by friends and family of the victim, multiplying the embarrassment. I thus applaud Facebook for trying to do something about the problem. However, I have some concerns and questions about the design as described thus far. This is a pilot; I hope there will be more information and perhaps some changes going forward.

My evaluation criterion is very simple: will this scheme help more than harm? I'm not asking how effective the scheme is; any improvement is better than none. I'm not asking if Facebook is doing this because they really care, or because of external pressure, or because they fear people leaving their platforms if the problem isn't addressed. Those are internal questions; Facebook as a corporation is more competent to evaluate those issues than I am.

There are two obvious limitations that I'm very specifically not commenting on: first, that Facebook is only protecting images posted on one of their platforms, rather than scouring the web; second, that the victim has to have a copy of the images in question. Handling those two cases as well would be nice—but they're not doing it, and I will not comment here on why or why not, or on whether they should.

I should also note that I have a great deal of respect for Facebook's technical prowess. It is somewhere between quite possible and very probable that they've already considered and rejected some of my suggestions, simply because they don't work well enough. More transparency on these aspects would be welcome, if only to dispel people's doubts.

The process, as described, involves the following steps. My comments on each step are indented and in italics.

The part that concerns me the most is the image submission process. I'm extremely concerned about new phishing scams. How will people react to email messages touting the "new, one-step, image submission site", one that handles all social networks and not just Facebook? The two-step process here—a web site plus an unusual action on Facebook—would seem to exacerbate this risk; people could be lured to a fake website for either step. The experience with the US government-mandated portal for free annual credit reports doesn't reassure me; there are numerous scam versions of the real site. A single-button submission portal would, I suspect, be better. Does Facebook have evidence to the contrary? What do they plan to do about this problem?

There has been criticism of the need for an upload process. Some have suggested doing the hashing on the submitter's device. Facebook has responded that if the hashing algorithm were public, people would figure out ways around it. I'm not entirely convinced. For example, it's been a principle of cryptographic design since 1883 that "There must be no need to keep the system secret, and it must be able to fall into enemy hands without inconvenience."

However… It may very well be that Facebook's hash algorithm does not meet Kerckhoffs's principle, as it is known, but that they don't know how to do better. Fair enough—but at some point, it's not unlikely that the algorithm will leak, or that people will use trial-and-error to find something that will get through. However, under my evaluation criterion—is this initiative better than nothing?—Facebook has taken the right approach. If the algorithm leaks or if people work around it, we're no worse off than we are today. In the meantime, keeping it secret delays that, and if Facebook is indeed capable of protecting the images for the short time they're on their servers (and they probably are) there is no serious incremental risk.
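
A brief aside on what "hashing" means here, since readers may be picturing SHA-256: a scheme like this presumably relies on a perceptual hash, one designed so that minor edits to an image produce the same or a nearby value. The Python sketch below is a generic "average hash", emphatically not Facebook's algorithm, which is exactly the part they haven't published; it is only meant to show how such matching works, and why a known algorithm invites trial-and-error evasion.

    # A generic perceptual "average hash" -- NOT Facebook's algorithm, which has
    # not been published; this only illustrates the idea of image hashing for
    # near-duplicate matching.
    from PIL import Image   # pip install Pillow

    def average_hash(path, hash_size=8):
        # Shrink to hash_size x hash_size grayscale; each bit records whether a
        # pixel is brighter than the image's mean brightness.
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = "".join("1" if p > mean else "0" for p in pixels)
        return int(bits, 2)

    def hamming_distance(h1, h2):
        # Number of differing bits; a small distance means "probably the same image".
        return bin(h1 ^ h2).count("1")

    # Hypothetical usage (file names are placeholders):
    # print(hamming_distance(average_hash("original.jpg"), average_hash("recropped.jpg")))

Once an adversary knows the transform, finding a crop, filter, or overlay that pushes the distance past whatever matching threshold is used becomes a matter of patient experimentation, which is the leak-and-workaround scenario above.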

Another suggestion is to delay the human verification step, to do it if and only if there's a hash match. While there's a certain attractiveness to the notion, I'm not convinced that it would work well. For one thing, it would require near-realtime review, to avoid delays in handling a hash match. I also wonder how many submitted images won't be matched—I suspect that most people will be very reluctant to share their own intimate images unless they're pretty sure that someone is going to abuse them by uploading such pictures. By definition, these are very personal, sensitive pictures, and people will not want to submit them to Facebook in the absence of some very real threat.

My overall verdict is guarded approval. Answers to a few questions, such as the ones raised above, would help.

But I'm glad that someone is finally trying to do something about this problem!


Update: I'm informed that the pilot is restricted to people over 18, thus obviating any concerns about transmission of child pornography.

Historical Loop

27 October 2017

I'm currently reading Liza Mundy's Code Girls, a book about the role that American women played in World War II cryptanalysis. (By coincidence, it came out around the same time as The Woman Who Smashed Codes, a biography of Elizebeth Friedman, one of the greatest cryptanalysts in history.) Mundy notes that the attack on Japan's PURPLE machine was aided by a design feature: PURPLE encrypted 20 letters separately from 6 other letters. But why should the machine have been designed that way?

PURPLE, it turns out, was a descendant of RED, which had the same 20/6 split. In RED, though, the 6 letters were the vowels; the ciphertext thus preserved the consonant versus vowel difference from the plaintext. But why was that a desirable goal?

The answer was economy. Telegraph companies of the time charged by the word—but what is a "word"? Is ATOY a word? Two words? What about "GROUP LEADER"? In English, that's two words, but the German "GRUPPENFÜHRER" is one word. Could an English speaker write "GROUPLEADER" instead?

The exact rules were a subject of much debate and were codified into international regulations. One rule that was adopted was to permit artificial words if they were pronounceable, which in turn was instantiated as a minimum density of vowels. So, to save money, the Japanese cryptologists designed RED to keep the (high) vowel density of Japanese as rendered in Romaji.
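
To make that concrete, the pronounceability test reduces to a vowel-density check along these lines (the one-third threshold is an assumption of mine for illustration; I don't know the figure in the actual regulations):

    # The telegraph pronounceability rule, reduced to code. The one-third
    # threshold is an illustrative assumption, not the figure from the actual
    # regulations.
    def counts_as_one_word(word, min_vowel_fraction=1/3):
        vowels = sum(1 for c in word.upper() if c in "AEIOU")
        return vowels / len(word) >= min_vowel_fraction

    print(counts_as_one_word("ATOY"))     # True: 2 vowels out of 4 letters
    print(counts_as_one_word("XKQZT"))    # False: no vowels at all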

Those rules were hotly contested at the time. One bitter opponent was William Friedman, himself a great cryptanalyst (and the husband of Elizebeth) and the administrative head of the US Army group that eventually broke PURPLE.

So: if Friedman's 1927 advice had been followed, RED would not have treated vowels differently, PURPLE wouldn't have had the 20/6 split, and Friedman's group might have been denied its greatest triumph.

Another Thought About KRACK

16 October 2017

I don't normally blog twice in one day (these days, I'm lucky to post twice in one month), but a nasty thought happened to occur to me, one that's worth sharing. (Thinking nasty thoughts is either an occupational hazard or an occupational fringe benefit for security people—your call…)

I, along with many others, noted that the KRACK flaw in WiFi encryption is a local matter only; the attacker has to be within about 100 meters of the target. That's not quite correct. The attacking computer has to be close; the attacker can be anywhere.

I'm here at home in a Manhattan apartment, typing on a computer connected by wired Ethernet. The computer is, of course, WiFi-capable; if I turn on WiFi, it sees 28 other WiFi networks, all but two of which use WPA2. (The other two are wide open guest networks…) Suppose someone hacked into my computer. They could activate my computer's WiFi interface and use KRACK to go after my neighbors' nets. Better yet, suppose I'm on a low-security wired net at work but am within range of a high-security wireless network.

I'm not certain how serious this is in practice; it depends on the proximity of vulnerable wired computers to interesting WiFi networks. Wired networks are no longer very common in people's houses and apartments, but of course they're the norm in enterprises. If you're a sysadmin for a corporation with that sort of setup, KRACK may be very serious indeed.