Europe’s Next Steps in Regulating Facial Recognition Technology

The EU is well known globally for its drive to secure personal privacy along new digital frontiers.  The much-discussed GDPR included some protections for biometric information, including facial recognition data.  This year, an internal debate is playing out in the EU over how far to go in addressing burgeoning uses of facial recognition: specifically, whether regulating the technology as “high risk” is sufficient, or whether a total ban on some uses is required.

The EU may soon pick a direction to pursue in its regulation of facial recognition technologies. Photo: Wikimedia Commons

By: Eileen Li, Staffer

 

Facial recognition technology has frequently been in the news in 2021, both as a harbinger of the future’s arrival and as a potential source of surveillance fears.  In the UK, schools are using the technology to make lunch lines faster; in Moscow, “Face Pay” now allows riders to pay subway fares with their faces.  The global facial recognition market, valued at $3.97 billion in 2018, is estimated to reach $10.15 billion by 2025.  

Jurisdictions around the world, including the traditionally privacy-protective European Union, are grappling with the difficult task of setting boundaries around the use of this new and potentially invasive technology. 

How Facial Recognition Technology Works

Facial recognition technology generally functions through four steps.  First, a camera detects or recognizes a human face in its view.  Next, the camera takes a photo of the face and analyzes its nodal points.  Third, the technology converts this nodal point analysis into a numerical code called a “faceprint,” unique to each person.  Finally, the system scans a database for a match with this unique faceprint and thus identifies a person; the database can also tie a face to other information, such as a passport number or bank account.  
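The final matching step described above can be sketched in a few lines of code.  This is a simplified, hypothetical illustration only: it models a “faceprint” as a short numeric vector (real systems use much longer embeddings produced by neural networks), and the function names, threshold value, and example database entries are all invented for demonstration.

```python
import math

def euclidean_distance(a, b):
    """Distance between two faceprint vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=0.6):
    """Return the record whose enrolled faceprint is closest to the
    probe, or None if no enrolled print is within the match threshold."""
    best_name, best_dist = None, float("inf")
    for name, enrolled in database.items():
        dist = euclidean_distance(probe, enrolled)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# Illustrative database tying faceprints to other records,
# e.g. a passport number.
database = {
    "passport:X123": [0.11, 0.52, 0.33, 0.98],
    "passport:Y456": [0.71, 0.02, 0.64, 0.25],
}

print(identify([0.12, 0.50, 0.35, 0.95], database))  # near X123's print
print(identify([0.99, 0.99, 0.99, 0.99], database))  # no close match
```

The privacy stakes discussed below follow directly from this design: once a faceprint is enrolled, a single camera frame suffices to link a person to everything else keyed to that database record.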

This Bulletin will explore how the continued development of facial recognition technologies has pushed the EU regulatory debate forward, from the 2016 General Data Protection Regulation (GDPR) to this year’s new Proposal for Artificial Intelligence (AI) Regulation.

The GDPR Approach

Biometric data falls under the GDPR’s definition of personal data, which includes any information that can identify a natural person by reference to “factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”  Under Article 9, the GDPR treats biometric data processed to uniquely identify a person as a special category of personal data, which generally cannot be processed or shared with third parties unless an exception applies. 

The GDPR does contain some exceptions to this requirement: for example, where processing is necessary for the purposes of employment, social security, and social protection law, or where it is necessary for reasons of public interest in the area of public health.  Individuals can also explicitly consent to share their information with third parties. 

April 2021 European Commission Proposal for AI Regulation

The proliferating AI market has raised a new debate about the regulation of facial recognition technology.  In April 2021, the European Commission released a draft proposal for the regulation of AI, which left most AI uses free from regulation but imposed heightened requirements on “high risk” AI applications.  One of the most discussed examples of high risk AI use is the “biometric identification and categorisation of natural persons.”  This category would likely encompass most facial recognition technologies, as the majority of facial recognition technologies utilize some form of “integrated deep learning and neural network” in identifying human subjects.  

While the proposed regulation bans the use of public biometric surveillance by law enforcement, a cross-party group of 40 members of the European Parliament (MEPs) attacked the legislation’s overall approach as too weak in countering the broad risks posed by facial recognition technology.  The Parliament members’ letter argues that Articles 42 and 43 of the proposed legislation “not only fail to ban biometric mass surveillance,” but “could even be interpreted to create a new legal basis and thus actively enable biometric mass surveillance where it is today unlawful (e.g. under Article 9 GDPR).”  The MEPs appear to be concerned about language in Article 42, which states that AI systems “that have been trained and tested on data concerning the specific geographical, behavioural and functional setting within [which] they are intended to be used shall be presumed to be in compliance” with the AI regulation’s data governance requirements, even if they do not fall into a specific GDPR exception.

The Reclaim Your Face advocacy coalition, formed by 60 European human rights and social justice groups, has also raised objections to the draft proposal.  While Reclaim Your Face calls the move to regulate AI a “step in the right direction,” the group critiques the proposal as too narrowly focused on law enforcement use, thereby “failing to prohibit equally invasive and dangerous uses by other government authorities as well as by private companies.”

European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) Joint Opinion 

In June 2021, Europe’s chief data and privacy regulators responded to the draft AI regulation.  In a joint opinion, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) Wojciech Wiewiórowski stated that they did not believe the April draft regulation went far enough.  Instead, they urged a general ban on “any use of AI for automated recognition of human features in publicly accessible spaces, such as recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, in any context.”  The joint press release also urged the European Commission to “explicitly clarify that existing EU data protection legislation (GDPR, the EUDPR and the LED) applies to any processing of personal data falling under the scope of the draft AI Regulation.” 

The EDPB and EDPS opinion also raised a growing concern at the intersection of surveillance law and human rights law: that biometric surveillance and identification will be used to classify individuals “based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights.”  The agencies called for a “precautionary approach” to the future of facial recognition technology, urging the European Commission to err on the side of banning the technology rather than allowing its use in uncertain arenas.  The cost of remote biometric surveillance, they argued, is nothing less than “the end of anonymity in [publicly accessible] spaces.” 

Next Steps: European Parliament Nonbinding Resolution

On October 6, 2021, the European Parliament passed a nonbinding resolution calling for a ban on police use of facial recognition in public places and on the creation of private facial recognition databases.  While the nonbinding resolution does not technically alter the draft AI regulation, it likely signals the European Parliament’s view on the need for stringent facial recognition prohibitions. 

The draft AI regulation will continue to undergo debate and review by the European Parliament and the Member States through the European Council.  While there is no clear timeline for the regulation’s adoption, some news accounts have suggested that it could be “several years before the regulation is ratified and comes into force.”  In the meantime, the delicate balance between privacy, innovation, and surveillance will undoubtedly continue to be debated in the European Parliament and in other jurisdictions. 

Eileen Li is a second-year student at Columbia Law School and a Staff member of the Columbia Journal of Transnational Law.  She graduated from the University of Chicago in 2018.  

 