Eyes are important, don’t get me wrong. So are ears, noses, tongues, fingers, balance calibration organs and everything else that feeds that massive brain of yours.1 Salinity detectors in narwhals, electrical sensors in freshwater bottom feeders, and echolocation in bats all provide sensory input that humans couldn’t adequately process. Every beast has its own senses relevant to its own living conditions.

Even your smartphone has cameras, microphones, gyroscopes, an accelerometer, a magnetometer, interfaces for phone/GPS/Bluetooth/WiFi, and some have a barometer, proximity sensors, and ambient light sensors. Biometric sensing equipment in today’s phones can include optical, capacitive or ultrasonic fingerprint readers and an infrared map sensor for faces.

But without some way to organize and make sense of the signals captured by these many input devices, the entire system isn’t worth much. Seeing an angry, massive bull rushing at you won’t make much difference unless you can turn that information into an appropriate action – like running away or quickly climbing a tree.

I raise this issue because, while the introduction of millions of surveillance cameras has received significant hand-wringing press in recent years, the real advance in surveillance is being implemented more quietly. Software and machine learning systems that can make sense of billions of inputs have been introduced and are improving every day, and it is this capability that makes societal surveillance so dangerously efficient and so threatening to our privacy.

In February, Wired published a story about Genetec’s Citigraf software, which has been deployed in Chicago. The software uses a “correlation engine” that reads not only the city’s live audio feeds (gunshot sensors, 911 calls) and video feeds (license plate readers, traffic cameras) but also historical police records, searching for patterns and connections and presenting a real-time picture of unfolding events along with interpretations of which events require official attention.

When the author, observing from a Genetec showroom thousands of miles away, clicked on an icon representing an assault in the neighborhood he was watching, “Seconds later, a long list of possible leads appeared onscreen, including a lineup of individuals previously arrested in the neighborhood for violent crimes, the home addresses of parolees living nearby, a catalog of similar recent 911 calls, photographs and license plate numbers of vehicles that had been detected speeding away from the scene, and video feeds from any cameras that might have picked up evidence of the crime itself, including those mounted on passing buses and trains. More than enough information, in other words, for an officer to respond to that original 911 call with a nearly telepathic sense of what has just unfolded.”

The brains of beasts take raw visual and audio inputs, combine them with previous knowledge and experience, and create actionable intelligence. Now Chicago police have a collective brain to do the same thing with the inputs from the city’s eyes and ears and the knowledge and experience saved in service databases, so when officers are called to a scene, they have not only a three-word description of the incident, but a full contextual universe of information to draw on. This makes law enforcement safer and more efficient, but it also raises privacy concerns for anyone whose name or face the computer connects with a crime.

In a city that has seen more killings than New York and Los Angeles combined in recent years, privacy may be less of a concern than improving police accuracy and responsiveness. Using gunshot recognition software, Chicago police “nerds” can sometimes find footage of a shooter before all the shots have been fired. But the ACLU has asked the city of Chicago to place a moratorium on the deployment of further cameras until a privacy review has been conducted. I would suggest that it isn’t the cameras that are the privacy concern; it is the deep analytical software combining their views with troves of related data.

The EU already has a law that allows people to object to decisions affecting their lives if those decisions were made by a machine. This month, the EU is expected to release detailed plans for regulating artificial intelligence, including rules limiting the use of biometric surveillance technologies in policing and of surveillance brains like the Citigraf system used in Chicago. It will be interesting to watch how the rush to apply all new technologies in law enforcement situations will fare when it hits the brick wall of EU privacy requirements. Thus far, law enforcement has largely escaped serious censure, but that may soon change.

London’s police department – fully Brexited, but still tied to EU privacy rules – announced in January that it would use facial recognition software to proactively spot criminal suspects on the massive video network installed in that capital city. An international melting pot with a history of terrorist attacks, London has instituted aggressive surveillance. According to the New York Times, London’s new system “can immediately identify people on a police watch list as soon as they are filmed on a video camera.” The UK’s top privacy regulator has raised concerns about the system, in particular whether the London police had weighed the need for this intrusive data against the damage it would inflict on citizens’ privacy.

Of course, none of these concerns are relevant in China, where an extensive surveillance network supports a social scoring system that keeps residents in line with government requirements and preferences. The Chinese government has combined the eyes and ears of its network with sophisticated AI software that can not only recognize faces and promptly place social demerits on a citizen’s account – demerits that may cost that citizen a job, an apartment or permission to start a family – but also summon government enforcers to take away that citizen’s freedom. According to a report by the Brookings Institution, “Despite a high degree of concern about Chinese surveillance technology, current policy discourse in the U.S. and abroad may actually have underestimated the scope and speed of its spread . . . [The Chinese technologies] involve a data integration and analytics platform that supports one or more high-tech command-and-control centers. The platform collects, integrates, and analyzes data from a wide range of sources, such as criminal records, other government databases, networked surveillance cameras, facial and license plate recognition applications, and other sources.” The study notes that Chinese companies linked to the government have been sanctioned abroad for “the implementation of China’s campaign of repression, mass arbitrary detention and high-technology surveillance.”

So is this our future? As sensing equipment expands everywhere, will software/AI brains combine sensory inputs with warehouses of data to not only watch our every move, but evaluate each move through the filter of government approval or disfavor? Once the systems are in place, it will be difficult to roll them back, so we should demand accountability now, before the beast grows beyond our ability to control it.


1 I assume that anyone reading this blog clearly possesses a prodigious intellect, as well as taste, refinement and an admirable character apparent to all.