Abstract
Excerpted From: Louise Grégoire, Law Enforcement Use of Facial Recognition--A Comparative Approach Between the United States and Europe to Tackle the Racial Bias of Facial Recognition Against People of Color, 39 American University International Law Review 415 (2024) (154 Footnotes) (Full Document)
In his dystopian novel, George Orwell described life in Oceania under a totalitarian regime that kept its citizens under surveillance twenty-four hours a day and where human rights were restricted or even nonexistent. Although such regimes were originally confined to fictional worlds, what the author described in 1949 has become reality. Technology has become extremely present in our daily lives, serving as an efficient tool for governments and law enforcement to monitor crowds and determine whether an individual is a potential suspect whom the police are seeking.
Facial recognition technology (FRT) symbolizes the development of technology in our society. FRT might seem overly intrusive, but it may benefit public safety and the identification and arrest of criminal suspects. Since the beginning of the twenty-first century, FRT has been heavily used by law enforcement. Despite its benefits, this technology raises several disturbing issues, especially regarding its inaccuracy. FRT is known for misidentifying people, especially people of color. In November 2022, a Black man from Georgia, Randall Reid, was arrested for stealing high-end Chanel and Louis Vuitton bags and was locked up for nearly a year. Despite Reid never having been to Louisiana, where the incident occurred, facial recognition misidentified him as the suspect. Unfortunately, over the years, there have been similar stories of FRT misidentifying Black people. As law enforcement has been denounced for racism against people of color in both Europe and the United States, the continued use of FRT will likely further perpetuate racial biases.
This article will focus on the use of FRT by law enforcement in the United States and in Europe and the racial biases involved in the use of the technology. Part II will focus on how bias originates from code and datasets and how the inaccuracy of facial recognition disproportionately affects people of color and reinforces discrimination. Part III will discuss the implications for human rights and how the rights of people of color are especially targeted. Part IV will examine the current regulations and frameworks at the national and regional levels to tackle discrimination and highlight their insufficiencies in addressing the impact of these technologies on people of color. Finally, Part V will propose legal avenues for state institutions to better limit the racial bias of facial recognition.
[. . .]
Despite the widespread use of facial recognition technology, there has not been much State oversight to control and prevent harm to people, especially people of color, who are particularly targeted by facial recognition as used by law enforcement. Regulations are necessary to prevent human rights violations. The rights of people of color, and of all people generally, must be protected. Therefore, States must act to meet their human rights obligations.
Ms. Grégoire is an LL.M. graduate of the University of Maryland, Francis King Carey School of Law. She also earned a Master's degree in English and North American Business Law from the University Paris 1 Pantheon - La Sorbonne, as well as a Master's degree and a Bachelor's degree from the University of Tours.