Why don’t our new privacy laws really protect our privacy?  Are we going about this the wrong way?

The topic was raised at the recent Capitol Forum gathering of state and federal enforcement agencies by FTC Chair Lina Khan, who proposed that enforcers need a new privacy “tool kit” to replace the concepts of notice and consent in current privacy law. The current shortcomings also struck me when reading a recent article by Northeastern University law professor Ari Ezra Waldman that accuses privacy professionals of complicity in destroying consumer privacy.

For hardline privacy protection advocates like Khan and Waldman, who have been able to opine from their academic chairs, Big Tech companies are bad actors that need their wings clipped. In their view, the current clippers are not sharp enough, and the people wielding them are not strong enough to do the job.

But the problems run deeper and the solutions, if there are any, will be difficult and complicated.

One of the first problems in building an effective privacy regime is definitional. What is private information, whose data is protected, and who are we protecting it from? We do not have a national consensus on any of these questions, and consensus will be required to change the system. I will not dig deeply into how to define protectable private data because I’ve written entire columns on this topic, but remember that data is simply a report of history – something that happened in the real world – and that information about real-world events does not belong exclusively to anyone. Our society finds that some activities behind closed doors or not disclosed to others can be protected as private, but what about all the rest? If you buy a shirt online, doesn’t the information belong to the retailer too? And the card company facilitating payment? And the delivery company? There are very few true historical facts that we are willing to lock away from the people who participated in the events the data describes.

Will these protections be available to every person in every situation? Doing something embarrassing at home is likely protectable in most circumstances. Doing the same thing in the middle of the town square is probably not protected as private. For centuries we have respected “privacy by obscurity,” but now that actions can be tied to faces more easily, people in the public square may not have the same rights. The US (and EU) enforcement community seems keen to protect your information from Big Tech companies, but our Founding Fathers clearly thought the worse threat was the entity with a monopoly on coercive power – the government. While some EU legal cases have used privacy laws to restrict government action, the US courts have been hesitant to do so. And what of other actors, from your ex-spouse to your neighbors to your boss to your neighborhood watch? Our privacy regime is not broad enough to encompass them. For some reason we seem worried only about protecting privacy from corporations that have minimal contact with us, while placing fewer restrictions on other problematic actors.

Another problem in building an effective privacy regime is historical. Over the first 20 years of the Internet we allowed entire industries, and several of the world’s most valuable companies, to build on a base of data analytics. Google, Facebook, Amazon, and to a lesser extent Microsoft, Apple, and Verizon have grown multibillion-dollar businesses out of capturing, analyzing, and making judgments based on information about their individual users. Now that the genie has escaped the lamp, it will be difficult to coax it back in. We are just now learning of some of the profit streams created by the enormous data ocean that we permitted to grow. Can we reduce its size? Certainly. But it is unlikely that we can organize a sensible privacy protection regime that will drain an already-extant data ocean.

Increasing use of AI will make the job harder, as data is further sliced, derived, and differentiated. So will the creep of technology from the internet into our wet-world lives. Any item of technology that you can talk to is listening more than you know. We are filling our lives with wearables, cameras, and home assistants that dig much deeper into our private lives than the old desktop computer ever could. It may be too late to effectively implement deep, across-the-board privacy protections for individuals.

Structural problems also hinder effective privacy protection. We have aimed our laws at slowing the use of private data by the people who collect it from us and at stopping them from transferring that data to others. However, the worst non-governmental data abuses are generally perpetrated by companies two, three, or ten steps removed from the collection process. An entire industry of data aggregators sells truckloads of information about us, and few legislators have acted to rein them in. Vermont demands that they register with the state, but few other restrictions bind them, and data-collecting companies are easily finding ways to push their consumer information further down the line.

So, if the current data protection paradigm will never be adequate, what can be done? The first question is whether a privacy regime should be based on a consumer’s choices and perceptions. Our state privacy laws are built around the concept of consumer choice, but that mode of operation assumes that consumers understand the choices they are making. Frequently, checking one box to unlock an immediate benefit can count as permission for a lifetime of privacy intrusion. The EU has tried to mandate specificity in privacy options, but that regime still leaves huge gaps. Can we trust companies to provide clear, unambiguous choices (or can we write laws specific enough to force them to do so), and can we trust consumers to make these choices for themselves, knowing the decision of the moment is difficult to overturn later?

On the US Constitutional side, we have relied on a citizen’s reasonable expectation of privacy to determine her rights. But if the surveillance environment changes so that we are monitored and recorded all the time, and we know it, are we not losing any reasonable expectation of privacy? If so, does this mean we lose all constitutional protection for privacy, or does it mean we change to a more clearly objective standard to protect our basic rights from government intrusion? One or the other must be true.

Europe has tried to implement a more objective standard of privacy, and it is playing havoc with government and company operations. Given the head start that tech data collection has had, major economic disruption may be a necessary element of protecting personal privacy. Do we have the stomach for that sort of change, or will we settle for crawling along with clearly inadequate data protections? A major jolt to the system could be the only way to make a new paradigm work.