In May 2022, the ICO fined Clearview AI more than £7.5 million for scraping images from social media to create a global online facial recognition database. This case has since fuelled debate around the use of live facial recognition technology in public spaces, including people's right to privacy and the application of data protection laws in cross-border scenarios. This article addresses some of these issues and provides practical steps for businesses looking to implement the latest technological developments.

Current regulation of live facial recognition technology

Due to the cost of living crisis, there has been a sharp rise in retail crime across the country. According to the British Retail Consortium (BRC), incidents of retail theft increased by an average of 27% across the 10 largest cities in the UK. The BRC's 2023 Crime Survey put the scale of retail theft at £953m. This is despite retailers having invested over £700m in crime prevention. 

The UK government has opted not to impose legislation to regulate AI, instead promoting a 'pro-innovation' approach. The UK Home Office has, however, been criticised for backing the use of live facial recognition technology by retailers to identify known shoplifters using images stored on the police national computer.

This position contrasts with the EU's current approach in its draft AI Act, which proposes to ban the use of facial recognition technology in public spaces for surveillance. The only exception will be where it is strictly necessary for law enforcement purposes. When used, it will be subject to stringent safeguards and allowed only for specific purposes, such as the prevention of a specific terrorist threat. 

Until recently, US law did not discourage the use of facial recognition for security purposes. Whilst there are now some states with biometric capture laws (including Illinois, Texas and New York) and some cities with facial recognition laws (San Francisco being the first city to ban police use of face recognition), the rules are inconsistent. More clarity is expected as cases involving the use of live facial recognition technology start to move through the court system.

ICO's investigations into Clearview & Facewatch

In April 2023, the ICO declared that Facewatch (a facial recognition system for retail security in the UK) was compliant with UK data protection laws. The ICO concluded that Facewatch's services achieve the legitimate purpose of detecting and preventing crime. 

This is in contrast to how the ICO has dealt with Clearview AI, a US-based facial recognition company that scrapes personal data from the internet. The key difference is that Facewatch only uses live facial recognition to identify known offenders from UK police databases. Clearview, however, can identify anyone on its database of 40+ billion images, which are scraped from the internet (from sites including Facebook, Instagram and LinkedIn).

In May 2022, the ICO fined Clearview over £7.5 million for using images of people in the UK and other countries from social media and other websites, in order to create its global online facial recognition database. The ICO concluded that Clearview had breached GDPR rules on fair and lawful processing of personal data, and on the retention of that data. 

But, in October 2023, the UK's First-tier Tribunal (Information Rights) (the "Tribunal") ruled that Clearview's data processing activities lay outside the material scope of the GDPR. As Clearview is a US company with no establishment in the UK or EU, the Tribunal found the ICO had no jurisdiction in the matter and therefore did not have the power to impose the fine.

Although the personal data in question included data belonging to UK residents, the Tribunal determined that the processing was not caught by the GDPR, as Clearview's services were provided to non-commercial criminal law enforcement entities outside the EU and UK.

However, the ICO has since announced it will seek to appeal the Tribunal's judgement. The ICO has challenged the judgement on the grounds of:

  1. Protecting data rights of UK residents
  2. Clarifying whether 'commercial enterprises profiting from processing digital images of UK people, are entitled to claim they are engaged in "law enforcement"'.

The outcome of this appeal will provide much-needed clarification on the application of the GDPR in these cross-border scenarios and on how to balance individual privacy rights when data is scraped from the internet.

The USA's approach to the use of live facial recognition technology is more relaxed than the UK's stance. Clearview is permitted to create a database of personal data scraped from the internet and to use this as the basis for its live facial recognition services. Whilst these services are offered only to US law enforcement agencies, it is unclear whether other organisations (such as those in the retail sector) would be able to purchase Clearview's services for their own ends.

Looking ahead

If the ICO is granted permission to appeal, aside from the jurisdictional issue of Clearview having no establishment in the UK, a key decision for the Tribunal will be whether companies can use data publicly available on the internet for their own commercial gain. The ICO's initial decision was that this was unlawful; alongside the £7.5 million fine, it ordered Clearview to stop using the personal data of UK residents and to delete the data of UK residents it currently held from its systems.

If the ICO's initial decision is found to be enforceable and is upheld by the Tribunal, this will likely deter businesses from scraping publicly available personal data from UK sites. On many sites, however, it will be difficult when harvesting data to determine whether personal data relates to a UK resident as opposed to a European or American resident. It will be interesting to see, therefore, whether businesses proceed with data scraping and risk a fine, or take a more risk-averse approach and only scrape data from sites where the residency of the data subject is known.

Practical tips for businesses

If your company uses facial recognition software, you must ensure that you have a good understanding of how the underlying technology works, the potential impact of the software on individuals, and what safeguards need to be implemented to protect individuals from harm. These include:

  • Undertaking a Data Protection Impact Assessment before any live facial recognition software is used
  • Robustly vetting your chosen tool to ensure you are aware from the outset of any weaknesses, including risks of discrimination – this is key to ensure organisations are able to address these potential weaknesses including through human oversight
  • Where your business operates within the EU market, you must bear in mind that live facial recognition software will be prohibited (except in specific circumstances) under the EU's new AI Act when it comes into force. In the UK market, however, the EU's AI Act can serve as guidance on good practice, such as: 
    • Understanding where your organisation sits in the value chain – will you be a provider or user of services? 
    • Keeping up-to-date records of current AI models used
    • Maintaining detailed records of the content used to train an AI model
    • Ensuring the AI complies with existing copyright and data protection laws 
  • Ensuring you have appropriate policies in place covering how you will use the technology, including an acceptable data processing policy for your business
  • Being clear when facial recognition technology is in use in shops and other public spaces. As part of its privacy policy, Facewatch requires any business it provides its services to, to notify customers that cameras are in operation by displaying signage saying that Facewatch facial recognition is in use.

If your business is considering implementing live facial recognition software, WBD has a focused retail sector group and a dedicated team of specialist digital lawyers. We have the expertise to assist your business in making decisions regarding the latest advancements in technology.