Most biometric readings require your presence in the same space as the measuring tools. Facial recognition, retinal capture, fingerprints or hand geometry, even biomarked scents are measured in close physical proximity. The primary biometric that can be measured from a remote location is your voice.
This makes sense, as voice inputs and typing/clicking are our methods of remote interaction in today’s world, and while measuring typing style can provide some hints to identify a user, voice provides a richer set of inputs and a more intimate measurement of biometric quality. When you call your bank or the state department of motor vehicles, the only things the people on the other end of the line have to analyze your intentions, your priorities, and your satisfaction are the words that you speak and the voice that you use to convey those words.
As we live in a time where all commercial/work interactions seem to be measured and analyzed, your voice is a natural target. It can be used in a biometric identification sense, to match the voice to the person, name and account for greater certainty (or for criminal conviction). It can be used to measure your emotions, to see if the interaction was frustrating you or pleasing you (and then result in a mollifying offer of a coupon or a well-timed sales pitch). Or it can be more deeply analyzed to determine your gender, race, weight and socio-economic status (an electronic Henry Higgins review).
And all of this analysis can be performed without your knowledge or consent, or, if a company cares to even attempt a justification, under the cover of “This call may be recorded for training and quality control purposes.” The fact that “training and quality control purposes” can include nearly all of the use cases described in the previous paragraph makes me suspicious that we will never be told when our voices are being used to identify or analyze us.
Some current laws would force such disclosure and specific authorization for vocal analysis of commercial telephone conversations with a company/entity call center. The privacy laws in the European Union clearly require individual opt-outs for telephonic voice recording and analysis. For example, according to Bloomberg, “Denmark’s Data Protection Authority announced April 11 [2019] that it banned the country’s largest telecom, TDC A/S, from recording customers’ calls, for training or any other purpose, until the company offers an opt-out or way to give active consent. The right to opt out in such circumstances is enshrined in the General Data Protection Regulation (GDPR). Voice recordings are seen as personal data under the GDPR, and the rules generally apply to both EU and non-EU companies that process the personal data of EU residents. … Companies operating in Europe may have to change their policies for recording calls for training purposes if other DPAs follow Denmark’s lead and enforce that part of the GDPR.”
In addition, the heavily litigated Illinois Biometric Information Privacy Act (“BIPA”) may also be interpreted to prohibit human voice capture and analytics without the data subject’s consent. “Voiceprint” is specifically named in the definition of Biometric Identifier subject to protection and proscription under BIPA, although the Illinois legislature simply assumed that we all know what “voiceprint” means. Courts interpreting BIPA have tended to rule that simply recording a voice does not trigger the statute, but that running analytics or using the voice for identification purposes will trigger it. McDonald’s was recently sued in a proposed class action under BIPA for use of voice technology in some of its restaurant drive-through lanes. Given that McDonald’s is likely using the vocal capture technology to clarify orders and not to identify individual customers, it seems unlikely that the company violated BIPA. However, the case will also give us insight into whether corporate America is basing customer analytics on these voice captures.
Voiceprints are also used in court cases as evidence, but some question their value. Prisoners convicted using voice analysis have been released years later when DNA evidence contradicted interpretations of voice identification. Some defense lawyers claim that voiceprint analysis leads to the “CSI effect” where forensic evidence is given undue weight in criminal trials. Scientific American asked, “is the science behind voice identification sound? Several articles in the scientific literature have warned about the quality of one of its main applications: forensic phonetic expertise in courts. We have compiled two dozen judicial cases from around the world in which forensic phonetics were controversial. Recent figures published by INTERPOL indicate that half of forensic experts still use audio techniques that have been openly discredited.” So the effectiveness of voice identification has also been controversial.
The privacy implications of cheap and accessible voice analysis are just starting to be explored. Penn professor Joseph Turow, author of “The Voice Catchers: How Marketers Listen In to Exploit Your Feelings, Your Privacy, and Your Wallet,” has documented the early stages of “a voice-profiling revolution that companies see as integral to the future of marketing.” His recent article in The Conversation noted, “Thanks to the public’s embrace of smart speakers, intelligent car displays and voice-responsive phones – along with the rise of voice intelligence in call centers – marketers say they are on the verge of being able to use AI-assisted vocal analysis technology to achieve unprecedented insights into shoppers’ identities and inclinations. In doing so, they believe they’ll be able to circumvent the errors and fraud associated with traditional targeted advertising. Not only can people be profiled by their speech patterns, but they can also be assessed by the sound of their voices – which, according to some researchers, is unique and can reveal their feelings, personalities and even their physical characteristics.” Because most companies don’t discuss their behind-the-scenes marketing programs, and never seem to publicly describe call center activities, we can only guess how many times and in how many ways our voices are being recorded and analyzed.
In addition to talking to industry insiders for his book, Turow also reviews some of the patents companies are filing for voice recognition technology, including tracking people within your home through voice signatures (Google) and using voice irregularities to remotely diagnose a cold or allergies and then sell cough syrup (Amazon). Every possible tool will be used by companies to read and manipulate us. We are vulnerable in every part of ourselves that we offer to them – including our voices.