9 November 2018
My thesis is simple: the way we protect privacy today is broken and cannot be fixed without a radical change in direction.
For almost 50 years, privacy protection has been based on the Fair Information Practice Principles (FIPPs). The FIPPs include several provisions, such as purpose specification and transparency, but the underlying principle is notice and consent: users must be informed about collection and must consent to it. This is true of both the strong GDPR model and the much weaker US model: ultimately, users have to understand what's being collected and how the data will be used, and agree to it. Unfortunately, the concept no longer works (if indeed it ever did). Arthur Miller (no, not the playwright) put it this way:
A final note on access and dissemination. Excessive reliance should not be placed on what too often is viewed as a universal solvent—the concept of consent. How much attention is the average citizen going to pay to a governmental form requesting consent to record or transmit information? It is extremely unlikely that the full ramifications of the consent will be spelled out in the form; if they were, the document probably would be so complex that the average citizen would find it incomprehensible. Moreover, in many cases the consent will be coerced, not necessarily by threatening a heavy fine or imprisonment, but more subtly by requiring consent as a prerequisite to application for a federal job, contract, or subsidy.
The problem today is worse. Privacy policies are vague and ambiguous; besides, no one reads them. And given all of the embedded content on web pages, no one knows which policies to read.
What should replace notice and consent? It isn't clear. One possibility is use controls: users specify what their information can be used for, rather than who can collect it. But use controls pose their own problems: they may be too complex to use, there are continuity issues, and, at least in the US, there may be legal issues standing in their way.
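To make the idea concrete, here is a minimal, hypothetical sketch of a use-control check. Everything here, including the class names, the set-of-strings policy model, and the purpose labels, is invented for illustration; a real system would need far richer policy languages and enforcement.

```python
from dataclasses import dataclass, field

@dataclass
class DataItem:
    """A piece of personal data tagged with the uses its owner permits."""
    owner: str
    value: str
    permitted_uses: set = field(default_factory=set)  # e.g. {"billing"}

def request_use(item: DataItem, purpose: str) -> bool:
    """Allow processing only if the owner listed this purpose.

    Note the check is on *what* the data will be used for,
    not on *who* is asking, which is the core of use controls.
    """
    return purpose in item.permitted_uses

# Alice allows her address to be used for billing and account recovery,
# but nothing else, regardless of who holds the data.
email = DataItem(owner="alice", value="alice@example.com",
                 permitted_uses={"account-recovery", "billing"})
```

Even this toy version hints at the continuity problem mentioned above: the permitted-use tags must travel with the data through every copy and every downstream recipient, or the check becomes meaningless.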
I suspect that what we need is a fundamentally new paradigm. While we're at it, we should also work on a better definition of privacy harms. People in the privacy community take for granted that too much information collection is bad, but it is often hard to explain to others just what the issue is. It often seems to boil down to a "creepiness factor".
These are difficult research questions. Until we have something better, we should rely on use controls; until those can be deployed, we need regulatory changes to how embedded content is handled. In the US, we should also clarify the FTC's authority to act against privacy violators.
None of this is easy. But our "data shadow" is growing longer every day; we need to act quickly.