One of the biggest legal problems in protecting AI users in the coming years will be accountability – dealing with the opacity of the black box and explaining decisions made by machine reasoning.

Understanding the logic behind an AI finding is not an issue where AI assists in spotting real-world risks that affect individuals – such as the current use of AI in radiology, where failure to use AI analysis may soon be considered malpractice. As long as the AI is accurate and productive in showing where cancer may exist, we don’t care how the machine picked that specific spot on the X-ray; we are just happy to have another tool that helps save lives.

But where the AI proposes treatments or outcomes, your clients – healthcare and otherwise – will need to be ready to defend those decisions. This means an entirely different baseline organization and feature set than the AI currently envisioned or in use. Just knowing where to look for cancer is a blessing. Machine-recommended treatments are only a blessing if we understand how the decisions were made and what options were considered.

Even more important is understanding AI logic in situations where institutions are making decisions that affect the future of individual humans. For example, a brilliant financially focused AI program may decide that all people with vowels at the end of their names are bad credit risks and should not receive loans from a big bank.

However, we know that such a decision, even if correct, creates a disparate impact for persons of Italian, Irish, Latino, and Japanese descent and therefore violates U.S. lending laws. The only way to overcome this presumption of violation with regard to a decision made on an individual would be an in-depth analysis of how the lending decision was made for that individual, and most current AI lacks the auditability and documentation to demonstrate how it reached its conclusion.
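The disparate-impact screen at issue here can be illustrated with simple arithmetic. One common rule of thumb (borrowed from employment law, not from the original text) is the "four-fifths rule": if a protected group's approval rate falls below 80% of the most-favored group's rate, disparate impact is presumed and an individual-level audit is needed. The sketch below is purely illustrative – all numbers, names, and the threshold are assumptions, not data from any real lender.

```python
# Hypothetical sketch of a disparate-impact screen using the
# "four-fifths rule." All figures below are fabricated for illustration.

def selection_rate(approved: int, applicants: int) -> float:
    """Fraction of applicants in a group who were approved."""
    return approved / applicants

def adverse_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Ratio of the protected group's approval rate to the reference group's."""
    return protected_rate / reference_rate

# Fabricated loan-approval counts for two groups of applicants.
reference_rate = selection_rate(approved=80, applicants=100)  # 0.80
protected_rate = selection_rate(approved=40, applicants=100)  # 0.40

ratio = adverse_impact_ratio(protected_rate, reference_rate)  # 0.50

# Under the four-fifths heuristic, a ratio below 0.8 flags possible
# disparate impact and triggers the individual-level analysis the
# article describes.
flagged = ratio < 0.8
print(flagged)  # True
```

Note that this aggregate screen only raises the presumption; rebutting it for any single applicant still requires the per-decision audit trail that most current AI cannot produce.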

The need for transparency is even greater at the intersection of AI, human fate, and public policy. In a democratic society, the public has a right to understand and evaluate how public decisions were reached. This is why states have “Sunshine Laws” and why public hearings are held for important policy decisions. This is why regulators send out their proposed regulations for public comment before enacting them. The public has a right to reach conclusions on decisions that affect the common good, and no one can reach an accurate conclusion without adequate information.

Two years ago Wired magazine dug deeply into this problem in an article on AI algorithms used by the Wisconsin Department of Corrections in sentencing convicted felons. The Wisconsin Supreme Court held that a felon had no right to transparency about why an AI criminal sentencing tool found him to be “high risk,” thereby drawing a long sentence – that the finding itself was transparency enough.

The author asked what courts can do when no one is able to explain a machine’s decision-making process:

[H]ow does a judge weigh the validity of a risk-assessment tool if she cannot understand its decision-making process? How could an appeals court know if the tool decided that socioeconomic factors, a constitutionally dubious input, determined a defendant's risk to society? Following the reasoning [from the Wisconsin Supreme Court], the court would have no choice but to abdicate a part of its responsibility to a hidden decision-making process.

And what happens when legislatures similarly abdicate their decision-making responsibility to a set of computer tools? No one is happy when a halfway house or low-cost public housing is placed in their neighborhood. How much worse would it be for those issues to be decided by AI, rather than the county commission? New York City uses predictive algorithms for automated policing decisions on who might engage in unlawful conduct. The city uses AI to allocate fire stations, public housing, and food stamps. The public’s calls for transparency cannot be met.

It seems that, for much useful AI, transparency will soon become a differentiator between algorithms that meet the public need and those that can generate decisions but leave their logic in the dark. If your son or daughter is looking for a technology job that is likely to be safe from replacement by AI in the coming years, tell them to become an AI transparency specialist. I predict a great future in it.