In a prior blog post, I discussed what might happen when artificial intelligence is applied to industries with a history of discriminatory practices. Would AI make the world a more just place? Or would it more systemically ingrain discrimination? And finally, what could be done about it?
The first two questions remain unanswered. However, Illinois appears to have offered an extremely modest answer to the third. In August, Illinois Governor Pritzker signed into law the Artificial Intelligence Video Interview Act, which is set to become effective on January 1, 2020.
The Act is noteworthy in its narrowness and restraint. Ultimately it’s merely a notification and consent law. Employers that ask applicants to record video interviews that will be analyzed by artificial intelligence must:
- Notify the applicant in writing before the interview that AI may be used to analyze the applicant’s facial expressions;
- Provide each applicant with information before the interview explaining how the AI works and what characteristics it uses to evaluate applicants; and
- Obtain written consent from the applicant to be evaluated by the AI program.
Employers may not use AI to evaluate non-consenting applicants, and they may not share applicant videos except with persons whose expertise is necessary to evaluate the applicant’s fitness for a position.
That’s it.
Illinois seems wary of AI being used to evaluate applicant microexpressions (the brief facial expressions lasting fractions of a second, believed to be more accurate reflections of our emotions and harder to fake)—at least without the applicant’s consent. Maybe that’s a good thing. Maybe an applicant should have a say in whether she wants her trustworthiness evaluated by a human instead of a cold, unfeeling program.
But even to that end, it does not seem like a particularly effective law. Nothing in the Act’s language prohibits an employer from discriminating against applicants who refuse to provide consent. And, as a practical matter, the imbalance of power between a prospective employer and an applicant makes it unlikely that an applicant would withhold consent—lest she be considered uncooperative.
More than that though, it seems like an odd place to start when restricting the use of artificial intelligence in the hiring process. AI holds the promise of increased efficiency. There have been reports that chatbots can decrease candidate screening time by nearly 75%. AI also has the potential to remove certain human biases by applying the exact same filtering logic to every candidate.
However, as a major tech company found out a few years ago, AI can also pick up new biases of its own. That tech company scrapped a secret AI recruiting tool after discovering it was filtering out nearly all female candidates. The tool had been programmed to look at previous high performers in the company and seek out candidates with similar backgrounds. I’m sure the programmers meant for the program to identify coding language expertise, educational backgrounds, and the like, but since the vast majority of the company’s employees were male, the tool taught itself that male candidates were preferable. This all goes back to what my co-blogger Ted Claypoole wrote about in “AI as a Black Box: How Did You Decide That?” You need to know how your programs are deciding what they’re deciding.
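That failure mode can be sketched in a few lines. The data and scoring function below are entirely made up for illustration—no one outside that company knows exactly how its tool worked—but they show how a model that never sees gender directly can still penalize women through a correlated proxy feature on a résumé:

```python
# Hypothetical sketch: "learning" a hiring profile from past high
# performers. Because past hires skew male, a gender-correlated proxy
# feature (here, a club name) dominates the learned profile.
from collections import Counter

# Made-up résumés of past "high performers" -- mostly one demographic.
past_hires = [
    {"lang": "java", "club": "rugby"},
    {"lang": "java", "club": "rugby"},
    {"lang": "python", "club": "rugby"},
    {"lang": "java", "club": "chess (women's)"},
]

# "Training": count how often each feature value appears among past hires.
profile = Counter(value for resume in past_hires for value in resume.values())

def score(candidate):
    """Score a candidate by similarity to past high performers."""
    return sum(profile[value] for value in candidate.values())

# Two candidates with identical technical skills; only the proxy differs.
a = {"lang": "java", "club": "rugby"}
b = {"lang": "java", "club": "chess (women's)"}
print(score(a), score(b))  # prints 6 4 -- the proxy alone drives the gap
```

Gender never appears as an input, yet candidate `b` scores lower solely because her club was rare among past hires. That is the black-box problem in miniature: the program’s logic looks neutral until you inspect what it actually learned.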
You could say I’m fairly ambivalent about the new Illinois law. I don’t think AI evaluating microexpressions poses a significant risk (assuming that microexpressions are consistent among genders, races, and ethnic groups). But there’s little in this world that can be hurt by obtaining the other party’s informed consent.