Abstract

Excerpted From: Eldar Haber and Shai Stern, Bias Notification Duty, 42 Cardozo Arts & Entertainment Law Journal 295 (2024) (310 Footnotes)

 

Today, algorithms intertwine with our lives more than ever before. Digital platforms influence--and largely control--what many individuals buy, how they communicate and to whom, which music they listen to or movies they watch, or what news they receive. Both the state and private entities use technological tools that influence or dictate decisions in almost every aspect of modern life, from marketing, housing, and hiring to criminal sentencing. In short, both private and public decisionmaking are becoming more reliant on sophisticated algorithms and machine learning techniques, which might eventually lead modern society into an automated, one might say autonomous, future.

For all of their benefits, one highly publicized concern with machine-learning and data-driven algorithms is their potential for biased outcomes. While these algorithms are not inherently biased, they reflect and often amplify human bias in their output, which creeps in at various stages of development and use and stems from different sources. Because human nature is biased, algorithms have repeatedly been shown to reproduce and amplify existing gender bias, racial bias, and many other forms of unfairness towards individuals in society, especially against legally protected classes.

This is not another article on bias and algorithms--at least, not in the way scholars and policymakers have addressed the topic so far. Without belittling the importance of efforts to reduce bias in algorithms or in their use after the algorithm is run, this article offers a different take on algorithmic bias and a solution that will go a few steps beyond current scholarship. What is missing from the bias puzzle is the study of bias itself, which technology now enables society to detect in ways that were unimaginable just a few short years ago.

Consider the following hypothetical thought experiment, which we call the pilot example. Say a coder builds a search engine designed to find images. Upon finishing, she tests the algorithm by searching “pilot,” only to discover that the search results mainly show male pilots. If the algorithm was correctly designed, then statistically, the results should accurately reflect the male-female ratio of pilots in society. Perhaps there is a good, possibly practical, reason why males prefer, or are preferred over females, to serve as airline pilots. But perhaps the reasons are embedded elsewhere--maybe most pilots choose their occupation after their military service, which skews the outcome in favor of males, who are more likely to serve? Perhaps recruiters and airlines prefer male pilots for other implicit or explicit reasons? Who knows?

Now say the coder thinks to herself, “That's not right! When someone searches for a photo of a pilot, the results should reflect both males and females, along with different races. Otherwise, individuals will grow up in a stereotypical and male-dominated world, which will harm females and others everywhere.” To fix such unfairness, she quickly diversifies the search results by modifying the algorithm and “correcting” the social bias within the system. The search results now show a variety of pilot photos across races and genders, reflecting all groups in society. She releases her search engine to the world.
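To make the hypothetical concrete, the following minimal sketch (our illustration only, not the coder's actual method or any real search engine's code) shows how such a “correction” might be silently embedded as a re-ranking step; the function name, field names, and group labels are hypothetical.

    # Hypothetical sketch: a hidden "diversification" step applied to search
    # results before they are returned. All names and labels are illustrative.
    from itertools import zip_longest

    def rerank_for_diversity(results, group_key="perceived_gender"):
        # Group results by a demographic label attached to each image.
        buckets = {}
        for item in results:
            buckets.setdefault(item.get(group_key, "unknown"), []).append(item)
        # Interleave the groups so no single group dominates the top results.
        # Users never see that the statistically "accurate" ordering was altered.
        reranked = []
        for row in zip_longest(*buckets.values()):
            reranked.extend(r for r in row if r is not None)
        return reranked

    # Example: four of five raw results depict male pilots; after re-ranking,
    # the top results alternate between groups, masking the underlying skew.
    raw = [
        {"url": "pilot1.jpg", "perceived_gender": "male"},
        {"url": "pilot2.jpg", "perceived_gender": "male"},
        {"url": "pilot3.jpg", "perceived_gender": "female"},
        {"url": "pilot4.jpg", "perceived_gender": "male"},
        {"url": "pilot5.jpg", "perceived_gender": "male"},
    ]
    print(rerank_for_diversity(raw))

Nothing in this sketch records or reports that a skew was ever observed, which is precisely the disclosure gap this article targets.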

Although the results are now “biased” from a statistical perspective, such a move is generally welcomed and falls within the core scholarly suggestions related to bias and AI. But as this article further argues, it also excludes important insights and issues about bias from public scrutiny. The “who knows?” question is precisely the point we are trying to make. When such action occurs behind the scenes, embedded within unseen code, potentially no one in the world has a clue that such bias exists--no one, that is, except the coder.

The existence of this bias, even once fixed, matters to coders, who should pay attention to it (and potentially to others) and to the use of datasets that could be tainted. It matters to males and females, airlines, pilots, and society at large, since society needs to learn that airlines hire more male pilots than female pilots for some reason. In a world that is still not fully automated, fixing the bias in the machine will not fix the potential bias in society, but merely one aspect of it. Fixing the bias algorithmically without scrutiny will make it harder to detect and thus do the opposite of what policymakers and scholars strive for. And as mentioned, perhaps the output is not biased at all but rather a reflection of intentional decision-making that is not discriminatory. But to know that, we must become aware of its existence--the “who knows” question once again.

We want to know--or at least try to know. We therefore propose that the state impose a Bias Notification Duty (BND) on companies and their employees. Our conceptual framework, outlined in Part III, can be summarized as follows: Companies that discover algorithmic bias should be legally obliged to provide notice of their findings to a designated governing body, analogous to the FTC, which will study them further. Upon scrutiny, the governing body will evaluate the impact of such bias (if it is indeed a bias) and notify those who are or were affected by it accordingly. These could include individuals already affected by the presence of the bias; other companies that might use similar technology or datasets, or that might otherwise suffer from similar biases; and, perhaps most importantly from our perspective, society as a whole--even if the bias is ultimately “fixed” algorithmically. The governing body must conduct further study to understand the roots of such bias and how to address it in situations lacking algorithmic decision-making, while also raising awareness of such potential bias within such decision-making. The governing body will then instruct how and when the bias should be fixed.

This proposal is not blind to many of its shortcomings, such as the weak incentives to share sensitive data, along with many other legal and market barriers to its success, which we discuss further in Section III.C. As we further show, BNDs are in no way a perfect method to detect bias in society, and they will likely suffer from growing pains and objections from technology companies on various grounds. However, as the world becomes more algorithm-driven, technology companies might increasingly obscure bias, making it crucial to utilize BNDs as a legal tool for debiasing society. Corporate social responsibility might become more important than before, and BNDs could nudge companies to embrace such disclosure, perhaps even voluntarily.

This article seeks to change the common viewpoint on bias, algorithms, and society. It is structured as follows: Part I explains bias in both humans and algorithms. It then reviews the legal frameworks that govern bias, algorithms, and AI. Part II demonstrates the limitations of current bias regulation and reviews scholarly suggestions for mitigating the associated risks. It then shifts policymakers' attention to the blind spot in bias regulation and scholarship, proposing to use technology as a sword, rather than a shield, against bias. That proposal is detailed in Part III, which is divided into four subparts: (A) the proposal to impose a bias notification duty on companies; (B) the rationales behind such a proposal; (C) the hurdles that must be overcome to make this mechanism effective; and finally, (D) a few potential paths that society might take and the relevance of our proposals in shaping them.

 

[. . .]

 

Bias exists almost everywhere we go, and algorithms can make it worse. But while scholars and policymakers devote their attention to eliminating or reducing bias within technology, we propose a different view: using the power of technology to discover and unveil injustice in society and to work to fix it in real life, not just within the algorithms. We are aware that our proposal poses many challenges that should be further scrutinized before such a duty is regulated. Our proposal is modest in that sense: it lays the groundwork for a different way of thinking about bias, technology, and society.

The imposition of BNDs on technology companies and their employees provides an opportunity not only to increase awareness of a specific algorithmic bias but also to further the industry's, as well as society's, understanding of biases and discriminatory practices. It offers a glimpse behind the scenes of algorithmic processes during the transition from human-based decision-making to algorithmic decision-making. It is now, while reliance on technology is expanding but humans remain involved, that identifying and understanding biases is most important.

While BNDs offer advantages to individuals, technology companies, industries, and society, they should be carefully designed to allow technology companies to thrive and continue their development. They should be considered a complementary mechanism that incentivizes companies to identify and disclose biases without fear of incurring liability. BNDs, in this sense, provide a balanced vision of accountability and transparency, one that combines both top-down and bottom-up approaches. They therefore contribute to fairer and more responsible algorithmic governance and, above all, a better society.


Associate Professor, Faculty of Law, University of Haifa; Director, The Haifa Center for Law & Technology (HCLT), Faculty of Law, University of Haifa; Faculty, Center for Cyber, Law and Policy (CCLP), University of Haifa.

Associate Professor, Faculty of Law, Bar-Ilan University.