Most people have a warped and deeply unrealistic understanding of data security.

There is no such thing as absolute security. For a thing to have value, you must be able to access the value – in effect, to use it. In order to use it, you must be able to reach it. And if you can reach it, then others can reach it, even if they do so by pretending they are you.

I have written previously about the famous robbery of the Antwerp Diamond Exchange, which is one of the most secure places on earth. But the Diamond Exchange is also a working business, so someone needs to access those diamonds six floors below street level and behind many layers of security, from lasers to heat-sensing devices to seismic measurement machines. A set of people with enough time, resources, and sophistication can force access through all this security – in the case of Antwerp by enrolling as reputable diamond dealing members of the exchange for seven years.

Do we suggest that the Antwerp Diamond Exchange was negligent in its security? Absolutely not. Just because a committed, brilliant group of thieves can determine how to successfully attack those defenses does not automatically make the defenses bad, nor does it make the keeper of the defenses negligent.

Data used for business operations is much more difficult to protect than diamonds. It can be copied without affecting the original file. Important, changing data is backed up and is therefore stored in multiple places. And we value quick access to the data we need from anywhere on the globe, so people who reach the data – legitimately or otherwise – do not need to be physically in the same place.

In addition, since data usage is part of the everyday activity of an enterprise, the holder has only a limited amount of resources to spend on protecting the data. Anyone who has worked in data security understands that every company has a queue – a list of protections the security team will add when it has the resources to do so. There is always a queue. More can always be done to make data more secure.

Making things worse, the people attacking data are more sophisticated every day.  They build better tools, attack new vulnerabilities, and devise new strategies. Most entities operate through their people, and people – with their predictable psychology – are the most significant vulnerability. Organizations are probed and attacked, sometimes millions of times a week, and they need to be successful at repulsing these attacks every time.  The cybercriminals only need one success to break into the data.

Most people don’t think of these issues when they read about a data exposure incident. They lump the company holding the data – a victim of crime – in with the criminal who stole the data. Many people are not interested in the highly complex analysis required to establish whether a company holding data was negligent or violated a duty of care when that data was compromised. They believe that if their data is compromised, then clearly the company holding it was at fault.

But in the U.S., assigning legal fault is more complicated than this. How will we know what a business needs to do to protect its data well enough to successfully fight a liability determination? What is the duty of care that a company owes to its customers, and is that duty the same for public address data as it is for credit card data?

A working group of the Sedona Conference has proposed a solid answer to these questions. By its own description, the Sedona Conference is a nonpartisan, nonprofit research and educational institute dedicated to the advanced study of law and policy, including privacy and data security law. The Conference has just published a commentary on a reasonable security test. The paper is worth reading.

The paper is brilliant at articulating the problems raised by trying to find a “reasonable” set of standards for companies to meet for legal compliance. Noting that all businesses are different – different data to protect, different resources, different technology, and different levels of risk – the paper makes the crucial observation that while aspirational guidelines like those provided by the National Institute of Standards and Technology (NIST) can be helpful frameworks, they DO NOT define what is reasonable for a business to adopt. The NIST security frameworks could be used by the U.S. Defense Department but would be prohibitively sophisticated and expensive for most small- to medium-sized businesses.

Acting as if published ‘best of breed’ standards are the basis of a reasonableness standard for business is a false equivalency that even the California Attorney General’s Office has suffered from in public documents. Suggesting that companies should meet an aspirational standard that few either need or can afford may lead to unfair findings of liability if a well-defended business still suffers a data breach and is sued for its trouble.

The paper recognizes this problem, observing, “California’s guidance describes the measures specified in the Center for Internet Security’s Critical Security Controls as furnishing the minimum security measures that the California Attorney General believed to be necessary ingredients of reasonable security.[1] Yet, because it is keyed to an identified set of 20 controls, the guidance is both cumbersome and static. In sum, regulatory guidance has not provided a test for determining reasonableness.” The paper doesn’t address the danger in California’s embrace of a cumbersome and static standard – that plaintiffs’ lawyers and even the state itself can now use the endorsement of this standard to bamboozle judges into holding all companies to these ‘standards’, regardless of how applicable they might be to a real business with a real budget.

The Sedona Conference paper notes that courts have already started applying standard negligence rules to data security breaches, necessitating a determination of whether the defendant practiced reasonable security measures. But finding such a test for reasonableness is both important and difficult. Illustrating only part of the complexity, the paper states, “Cybersecurity ‘reasonableness’ crosses both legal and technology issues. Reasonable security is a standard that both legal and technology professionals seek to apply. It can be difficult for information technology (IT) organizations to understand how to apply legal concepts to their organizations; it is similarly difficult for lawyers, compliance/risk professionals, and even judges to understand IT well enough to apply it to the legal concepts they know. A reasonableness test would help to bridge that divide.”

Over the coming weeks, I will return to the Sedona Conference commentary on a reasonable security test. Addressing this issue is crucial to managing the legal consequences of data exposure incidents in a rational way.