Who Pays?

16 May 2017

Computer security costs money. It costs more to develop secure software, and there’s an ongoing maintenance cost to patch the remaining holes. Spending more time and money up front will likely result in lower maintenance costs going forward, but too few companies do that. Besides, even very secure operating systems like Windows 10 and iOS have had security problems and hence require patching. (I just installed iOS 10.3.2 on my phone. It fixed about two dozen security holes.) So—who pays? In particular, who pays after the first few years when the software is no longer, at least conceptually if not literally, covered by a "warranty"?

Let’s look at a simplistic model. There are two costs: a development cost $d and an annual support cost $s for n years after the "warranty" period. Obviously, the company pays $d and recoups it by charging for the product. Who should pay $n·s?

Zeynep Tufekci, in an op-ed column in the New York Times, argued that Microsoft and other tech companies should pick up the cost. She noted the societal impact of some bugs:

As a reminder of what is at stake, ambulances carrying sick children were diverted and heart patients turned away from surgery in Britain by the ransomware attack. Those hospitals may never get their data back. The last big worm like this, Conficker, infected millions of computers in almost 200 countries in 2008. We are much more dependent on software for critical functions today, and there is no guarantee there will be a kill switch next time.
The trouble is that n can be large; the support costs could thus be unbounded.
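
To see how quickly that adds up, here is a minimal sketch in Python; the dollar figures and lifetimes are invented purely for illustration and are not taken from the post.

    # Illustrative sketch only: d, s, and the lifetimes below are made-up
    # numbers for the sake of the example, not figures from the article.
    d = 2_000_000      # one-time development cost, in dollars
    s = 250_000        # annual post-"warranty" support cost, in dollars

    for n in (5, 10, 20, 30):   # years of support after the warranty ends
        print(f"n = {n:2d} years: support n*s = ${n * s:,}  vs.  development d = ${d:,}")

Even with these made-up numbers, once n reaches a decade or two the cumulative support bill rivals or exceeds the original development cost, which is why the question of bounding n matters.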

Can we bound n? Two things are very clear. First, in complex software no one will ever find the last bug. As Fred Brooks noted many years ago, in a complex program patches introduce their own, new bugs. Second, achieving a significant improvement in a product’s security generally requires a new architecture and a lot of changed code. It’s not a patch, it’s a new release. In other words, the most secure current version of Windows XP is better known as Windows 10. You cannot patch your way to security.

Another problem is that n is very different for different environments. An ordinary desktop PC may last five or six years; a car can last decades. Furthermore, while smart toys are relatively unimportant (except, of course, to the heart-broken child and hence to his or her parents), computers embedded in MRI machines must work, and work for many years.

Historically, the software industry has never supported releases indefinitely. That made sense back when mainframes walked the earth; it’s a lot less clear today when software controls everything from cars to light bulbs. In addition, while Microsoft, Google, and Apple are rich and can afford the costs, small developers may not be able to. For that matter, they may not still be in business, or may not be findable.

If software companies can’t pay, perhaps patching should be funded through general tax revenues. The cost is, as noted, society-wide; why shouldn’t society pay for it? As a perhaps more palatable alternative, the costs of patching old software could be covered by something like the EPA Superfund for cleaning up toxic waste sites. But who should fund the software superfund? Is there a good analog to the "polluter pays" principle that funds environmental cleanups? A tax on software? On computers or IoT devices? It’s worth noting that it isn’t easy to simply say "so-and-so will pay for fixes". Coming up to speed on a code base is neither quick nor easy, and companies would have to deposit with an escrow agent not just complete source and documentation trees but also a complete build environment; compiling a complex software product takes a great deal of infrastructure.

We could outsource the problem, of course: make software companies liable for security problems for some number of years after shipment; that term could vary for different classes of software. Today, software is generally licensed with provisions that absolve the vendor of all liability. That would have to change. Some companies would buy insurance; others would self-insure. Either way, we’re letting the market set the cost, including the cost of keeping a build environment around. The subject of software liability is complex and I won’t try to summarize it here; suffice it to say that it’s neither a simple solution nor one without significant side-effects, including on innovation. And we still have to cope with the vanished vendor problem.

There are, then, four basic choices. We can demand that vendors pay, even many years after the software has shipped. We can set up some sort of insurance system, whether run by the government or by the private sector. We can pay out of general revenues. If none of those work, we’ll pay, as a society, for security failures.

https://www.cs.columbia.edu/~smb/blog/2017-05/2017-05-16.html