January 2010
Why I Won't Buy an E-book Reader -- and When I Might (13 January 2010)
Google, China, and Lawful Intercept (13 January 2010)
Why Isn't My Web Site Encrypted? (16 January 2010)

Why I Won't Buy an E-book Reader -- and When I Might

13 January 2010

There have been many news stories lately about ebook readers. The New York Times said that they were prominently featured at the Consumer Electronics Show. Amazon is pushing its Kindle; Barnes and Noble has its Nook. There are many other aspirants, either on the market now or waiting in the wings. For now, though, I’m sitting on the sidelines.

Many of my objections are familiar. Some readers, like the Kindle, use proprietary formats. The Kindle and the Nook are optimized for buying books from a single vendor — bye-bye, competition. And if the vendor decides that the product is obsolete or the company folds, I’m not just left with another electronic paperweight; I may also lose access to my books. Speaking of which — could Amazon possibly have found a less apt book to retroactively "unsell" than George Orwell’s 1984? You can’t make up stuff like that!

The issue of vendor control is a very deep and troubling one. Avi Rubin has pointed out that Amazon decides when or if they’re going to update the software on Kindles; this is, to say the least, suboptimal. If you buy a product because it has certain features and the vendor later removes those features, have they violated your rights? To be sure, their lawyer probably stuck some clauses in the shrink-wrap license, but you almost certainly didn’t read it…

Then there are format issues. Amazon has their own, proprietary format, which is part of the whole vendor lock-in. I can’t give away or lend books the way I can with physical objects, save for the very restricted lending with the Nook. Even then, you can only lend the book to another Barnes and Noble customer. Yes, I understand the publishers’ and vendors’ motives for imposing such restrictions. They have their own needs and goals, some of them very legitimate. That said, my goal is to optimize for my own interests, not theirs; often, though, theirs and mine conflict, and for now my interests are better served by dead tree editions.

Beyond that, I spend far too much of my life on airplanes. I can read a physical book when the plane is below 10,000 feet; I’m not allowed to use an electronic device. Yes, it would be nice to cut my carry weight for books on long trips, but even that doesn’t quite tempt me.

Given all that, why am I still mulling the idea? I have a lot of books. Strike that — I have a LOT of books. I don’t know how many, even approximately; I do know that they occupy at least 170 linear feet (more than 50 meters) of shelf space. And that’s just my books; the family’s collection is considerably larger. I want an ebook reader that not only lets me buy new books, but gives me access to my old ones.

I certainly don’t want to repurchase all of my old books. In an intellectual property sense, I shouldn’t have to; after all, I’ve already paid the "license" fee for the copyrighted content. Right now, I just want to upgrade the medium. Besides, some of the books are quite old, from when books were much cheaper than they would be if purchased today: the book in my backpack right now, for reading on the train to and from Manhattan, cost me $1.50 when it was new, more than 40 years ago. I don’t see an economic model, though; there’s not that large or lucrative a resale market for them, and almost certainly not enough to pay for new, digital editions, even assuming that they’re now in print electronically. Still, that’s what I really want.

I strongly suspect I’m not the only one in this position. People who read lots of books are the natural market for high-priced ebook readers. The first vendor to solve the library problem will probably win a lot of sales, all of the other issues notwithstanding.

Tags: copyright

Google, China, and Lawful Intercept

13 January 2010

Like many people, I was taken by surprise by Google’s announcement about its threatened withdrawal from China in the wake of continued censorship and attacks that appeared to emanate from there. My immediate reaction was quite simple: "Wow".

There’s been a lot of speculation about just why they pulled out. Some reports noted that Google has been losing market share to Baidu. Under those circumstances, cutting losses makes sense. Yahoo and many other Western companies have done that.

I don’t think, though, that that’s the whole story. Blaming China not for its rules, which the Chinese government defends, but for hacking is an entirely different kettle of fish. That is a move more or less guaranteed to raise the ire of Chinese government officials, and quite likely block the return of Google to China for a very long time. And, of course, there’s no reason to think that if China has indeed been attacking Google, this will make it stop — quite the contrary, I suspect.

There is, I suppose, a line of reasoning that assumes that China will retaliate for the insult by blocking access to all of Google’s services, including gmail; this in turn might mean less use of gmail by Chinese dissidents, which in turn would give the government less reason to hack Google. I don’t buy it. There are lots of other reasons to hack. The Wall Street Journal says

Much of the data stolen from Google was its "core source code," Mr. Mulvenon said. "If you have the source code, you can potentially figure out how to do Google hacks that get all kinds of interesting data." Among the data, would be the information needed to identify security flaws in Google’s systems, he said.

Beyond that, the source code to much of Google’s infrastructure has immense value, though I should add the caveat that running an operation of that scale requires a lot more than a code base. All in all, this looks like an extremely rare case of a foreign company taking a stand on human rights. In fact, the Wall Street Journal unambiguously credits Sergey Brin with the initiative.

The most interesting aspect of the whole affair, though, might be one of the ways the attacker got in. Matt Blaze pointed me at an article that states that the attackers abused the "lawful intercept systems" — the mechanism that Google uses to comply with subpoenas. If this is true, it represents another major abuse of such mechanisms, probably second only to the Athens Affair, where parties unknown used an analogous mechanism in a Greek cell phone switch to eavesdrop on some mobile phone calls in Athens.

Unfortunately, I can’t say I’m surprised that such things can happen. My colleagues and I have been warning for years of the risks of schemes to ease government access. (There are a number of papers and essays on the subject on my web page.) The proper question is no longer whether or not lawful intercept schemes are dangerous; I think that question is now settled. Rather, we must ask this: are the dangers from lack of government access to nasty people’s communications greater or less than the dangers from other nasty people abusing these self-same mechanisms? I don’t think that that perspective has been adequately addressed.

Given that, another Google announcement — that they’re turning on https by default for gmail users — is quite intriguing. Six months ago, I was one of the signatories on a letter that Christopher Soghoian drafted calling for just such an action. The official word is that https would not have prevented these attacks:

Sam Schillace, an engineering director at Google Apps, said the shift to default HTTPS was not prompted by the attacks and, to the best of his knowledge, would not have averted them. The move had been in the works for some six months, during which time Google engineers did extensive testing and made numerous technical fixes to enable a smooth transition.

However, the announcement itself was prompted by the attack news. "The Gmail team decided, why wait?" he said. "We want our users to be as safe as we can make them be."

Indeed, if the lawful intercept mechanism was on the plaintext side of the decryptor, the new defense would not have helped. But there are many other threats to communications, and it’s a lot easier for the Chinese government (or any other government) to tap communications on its own territory.

This is still a hot, breaking story, and I don’t claim to know everything or even close to everything about it. I’m sure that more details will come out over the next few weeks. Brian Krebs has an excellent summary article posted; I hope he’ll continue to update it. For the moment, though, my tentative conclusions are that genuine ethical concerns, possibly coupled with ire about the hacking, have led Google to take a step that may not be in their best long-term financial interests. Such behavior by corporations is rare but praiseworthy.


Update: I should have added — I do receive a small amount of research funding from Google. Virtually all of this money has gone towards student travel to conferences.

Why Isn't My Web Site Encrypted?

16 January 2010

In an NY Times Room for Debate posting, I urged a lot more use of encryption, even for routine posts. But my blog and web site are not encrypted. Why not? And can I fix it?

The short answer to the first question is simple: when I set up the blog, a few years ago, I just didn’t think about it. The second question, though, is remarkably hard to answer.

Proper web site design uses relative links. That is, instead of writing something like

<a href="http://www.cs.columbia.edu/~smb/blog/2010-01/2010-01-13a.html">…</a>
to refer to the previous post, I should simply write
<a href="2010-01-13a.html">…</a>
That makes it a lot easier to move web pages around. If people only viewed the blog as a web site, I would do that. But many people view it via a variety of RSS readers, which poses several problems.

First, many RSS readers don’t seem to do the right thing with relative links. Relative links that work perfectly well on the web site don’t work at all via RSS feeds. Maybe my directory structure is wrong for that; still, I haven’t gotten it to work. For that matter, links to postings in the RSS feed itself appear to need to be absolute. Again, maybe I’m doing it wrong, but I could never get that to work properly.
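The failure mode is easy to see: a relative link only means something once it is resolved against a base URL, and feed readers disagree about what that base should be. A minimal sketch, using this blog's own URLs purely for illustration, of what correct resolution looks like:

```python
from urllib.parse import urljoin

# The page (or feed entry) in which the relative link appears.
base = "http://www.cs.columbia.edu/~smb/blog/2010-01/2010-01-16.html"

# Per the standard URL-resolution rules, the relative reference
# replaces the last path segment of the base.
resolved = urljoin(base, "2010-01-13a.html")
print(resolved)
# → http://www.cs.columbia.edu/~smb/blog/2010-01/2010-01-13a.html
```

A reader that instead resolves the link against its own origin, or against the feed file's location rather than the entry's, produces a dead link — which is consistent with the breakage described above.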

I also need to maintain backwards compatibility; I want all old links to continue to work.

There’s another problem: if you use https: (i.e., if you use an encrypted web page), you need a trust anchor, a starting point for the certificates that verify a site’s identity. Your browser has a lot built in; last time I checked, Firefox listed about 165 trust anchors (sometimes known as "certificate authorities" in this case). What trust anchors do RSS readers use? There are only a handful of important browsers; there are many more RSS readers and aggregators. What about search engines? Whom do they trust? (Do search engines even crawl https:-protected pages? Content isn’t very findable unless it’s indexed by Google, Bing, Yahoo, etc.)
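To make the trust-anchor question concrete: any https: client, not just a browser, carries (or borrows) a set of root certificates it is willing to chain back to. A small sketch of how a client-side TLS context picks up its trust anchors, here using whatever the local platform provides:

```python
import ssl

# Build a client-side TLS context the way most HTTPS clients do:
# certificate verification on, platform trust anchors loaded.
ctx = ssl.create_default_context()

# Each entry here is one trust anchor (a root CA certificate) that
# this client will accept as the start of a site's certificate chain.
anchors = ctx.get_ca_certs()
print(f"{len(anchors)} trust anchors loaded; "
      f"verification required: {ctx.verify_mode == ssl.CERT_REQUIRED}")
```

The count varies by platform, which is exactly the point: an RSS reader that rolls its own HTTP stack may end up with a different — or empty — anchor set than the browser sitting next to it.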

Finally, a noticeable portion of my web site is generated by programs. I’d have to modify the programs and/or their configuration files or wrapper scripts to spit out https: instead of http:, or possibly even create duplicate copies of pages. I’d also have to go back and fix up the absolute URLs where I can. I can’t just do a blind substitution, though, because things like BibTeX entries need to contain the absolute references (to the https: copy?), rather than relative ones.
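The kind of selective rewrite this requires might look like the following sketch. To be clear, this is hypothetical: the base URL and the crude BibTeX-entry detection are illustrative assumptions, not my actual build scripts.

```python
import re

# Illustrative base; a real script would take this as a parameter.
BASE = "http://www.cs.columbia.edu/~smb/blog/2010-01/"

def relativize(text: str) -> str:
    """Turn absolute links under BASE into relative ones, but skip
    BibTeX entries, whose url fields must remain absolute."""
    out, in_bibtex = [], False
    for line in text.splitlines(keepends=True):
        if re.match(r"\s*@\w+\s*\{", line):    # start of a BibTeX entry
            in_bibtex = True
        if not in_bibtex:
            line = line.replace(BASE, "")      # absolute -> relative
        if in_bibtex and line.strip() == "}":  # crude end-of-entry test
            in_bibtex = False
        out.append(line)
    return "".join(out)
```

Even this toy version shows why a blind substitution fails: the rewrite has to know what kind of text it is looking at, line by line.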

So what am I going to do? I will indeed upgrade the site to ensure that everything is accessible with or without encryption. It’s going to take a while to do that, especially because the semester starts in a few days and I’m not going to have much free time. But remember this: if I can’t do a flash cut to ubiquitous encryption, neither can a big web site like Google or the NY Times. Granted, being a web site maintainer isn’t my full-time job; on the other hand, my site is a lot less complex.