Abstract
Excerpted From: Simon R. Graf, The Sins of the Father: Excising Malignant Bias from Artificial Intelligence, 19 Journal of Business & Technology Law 401 (2024).
Artificial Intelligence (AI) has permeated nearly every pore of our society, from autonomous vehicles to digital assistants to facial recognition systems. AI is a highly technical discipline, the inner workings of which are often opaque, withheld from the public on a proprietary basis, or otherwise inaccessible. Academically speaking, “Artificial Intelligence” is the study of how to make computers emulate actions and behaviors that we associate with human thinking, such as “decision-making, problem solving, learning,” “us[ing] language, form[ing] abstractions and concepts, solv[ing the] kinds of problems now reserved for humans, and improv[ing] themselves.” Practically speaking, “AI” is an umbrella term encompassing many distinct but related models for automating tasks and decisions that would otherwise be assigned to humans. Scientists study AI for many different reasons, including to gain a greater philosophical understanding of human thought; as a purely academic exploration of computer capabilities; to simplify or automate complex, rote, repetitive, or otherwise unpalatable tasks or decisions; and to develop systems that remove human subjectivity from decision-making.
A system that makes a decision or judgment based, at least in part, on the output of an AI algorithm is often referred to as an Automated Decision System (ADS). Although some varieties of AI are characterized by their ability to “learn,” an algorithm need not be capable of learning to fall into the category of an ADS. Indeed, the U.S. government has defined the term “Automated Decision System” to mean “any system, software, or process (including one derived from Machine Learning, statistics, or other data processing or artificial intelligence techniques and excluding passive computing infrastructure) that uses computation, the result of which serves as a basis for a decision or judgment.” An ADS, then, can be as simple as one or more computations used to make some determination.
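To make the breadth of that definition concrete, consider a minimal sketch of a hypothetical loan pre-screening tool. Every name, weight, and threshold below is invented for illustration; the point is only that a few lines of fixed arithmetic, with no "learning" component at all, satisfy the government's definition of an ADS because their result serves as the basis for a decision.

```python
# Hypothetical illustration: a trivial Automated Decision System (ADS).
# No machine learning is involved; a fixed, hand-written computation
# drives the decision, which is all the statutory definition requires.
# The weights and threshold are invented for illustration only.

def prescreen_applicant(income: float, debt: float, years_employed: int) -> bool:
    """Return True if the applicant advances to human review."""
    # "One or more computations..."
    score = 0.5 * (income / 10_000) - 0.8 * (debt / 10_000) + 0.3 * years_employed
    # "...the result of which serves as a basis for a decision or judgment."
    return score >= 2.0

# This applicant is screened out before any human ever sees the file.
print(prescreen_applicant(income=45_000, debt=30_000, years_employed=1))  # False
```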
A thoughtful implementation of AI has great potential to simplify and expedite routine tasks and produce more consistent results compared to humans. Indeed, ADS are already employed to guide professionals in healthcare, criminal justice, actuarial science, education, employment, and more. By contrast, numerous studies, lawsuits, and high-publicity gaffes have illustrated that AI is only as “intelligent” as its developers train it to be. There is a common perception among the public--often invoked as an alluring justification for delegating public interest decisions to ADS technologies--that because AI is consistent, objective, and data-driven, it is inherently fair and bias-free. The unfortunate reality is that AI is capable of being objective and biased at the same time, and when deployed prematurely in high-stakes settings, these systems can “perpetuate harms more quickly, extensively, and systematically than human and societal biases on their own.” To make matters worse, there is no way to guarantee that an algorithm is not biased--or will not become biased in the future. To build a safe, equitable, and ethical foundation for public-facing AI algorithms, we must regulate the four interdependent “cornerstones” of trustworthy ADS: fairness, transparency, accountability, and sustainability.
This paper will explore the causes and discriminatory effects of algorithmic bias in AI and will propose a regulatory model to reduce and remedy the propagation of biased AI. First, Section I will examine the origins of the three types of algorithmic bias, as identified by the National Institute of Standards and Technology (NIST). Next, Section II will detail the two distinct manifestations of algorithmic bias and their respective consequences. Finally, Section III will propose a regulatory framework to protect vulnerable populations from algorithmic bias, mitigate its adverse effects, and provide legal recourse for those affected.
[. . .]
NASA documentation standards manuals are incredibly detailed and rigorous, but why? Perhaps because the equipment is expensive, the instruments are sensitive, and the systems are complex, or maybe because human lives hang in the balance. As the prevalence of AI in our society has exploded, we are now seeing AI models operating at such scale that they have begun discriminating against vulnerable populations with devastating results. The damage such complicated and sensitive AI systems can inflict is inestimable and, indeed, human lives hang in the balance. Yet AI development has entered its era of “cargo cult science,” a term coined by physicist Richard Feynman to describe “practices that superficially resemble science but do not follow the scientific method.”
Ruha Benjamin proposed that the dominant ethos in AI is Facebook's original motto: “Move Fast and Break Things,” in response to which she posed the question: “What about the people and places broken in the process?” A continuing issue with AI is the degree of trust the public inherently invests in a technology they (generally) neither understand nor have access to. Ed Finn, director of the Center for Science and the Imagination at Arizona State University, described this phenomenon, arguing that “computation casts a cultural shadow that is informed by this long tradition of magical thinking.” It may be this unsettlingly blind trust in AI that prompted Donald Knuth, author of The Art of Computer Programming, to comment that “algorithms are getting too prominent in the world. It started out that computer scientists were worried nobody was listening to us. Now I'm worried that too many people are listening.”
A solution to a different problem, offered by one of Benjamin's students, is appropriate here:
To change [AI], we will have to change the people using it. To change those people, we will have to change the culture in which they--and we--live. To change that culture, we'll have to work tirelessly and relentlessly towards a radical rethinking of the way we live--and that rethinking will eventually need to involve all of us.
Until then, the best we can do is emulate Allegheny County's meticulous development practices and enact meaningful legislation to tame AI's “wild west” era. By following in the footsteps of the AI Act, incorporating directives from executive orders, and adopting thoughtful retrospective recommendations, we can devise AI regulations to combat algorithmic discrimination and prioritize fairness, transparency, accountability, and sustainability.
J.D. Candidate 2025, University of Maryland Francis King Carey School of Law.