Abstract
Excerpted From: Chris Chambers Goodman, Algorithmic Bias and Accountability: The Double B(l)ind for Marginalized Job Applicants, 96 University of Colorado Law Review 501 (2025) (243 Footnotes)
“Help Wanted. Experience Necessary. Apply Online.” The one-click nature of online employment applications means that organizations receive an exponentially higher number of résumés than ever before. While the wealth of résumés creates opportunities for a broader and deeper exploration of the available talent pools for each position, the sheer numbers can overwhelm any human resources professional. The increasing abilities of artificial intelligence (AI) technologies, including machine learning and large language models, provide effective mechanisms for funneling applicants into tranches of “qualified” or “highly qualified” and separating out the un- or under-qualified. Because of the economic practicalities, employers increasingly use AI technologies to assist with or make hiring decisions. Employers expect these technologies to quickly and accurately assess candidates' merit.
In addition to the economic savings of using AI technologies, some employers believe that AI also promotes greater fairness and nondiscrimination in the hiring process. However, it is important for employers to understand that some biases are embedded in AI technologies, and failing to account for those biases can result in large-scale hiring discrimination. In fact, the Equal Employment Opportunity Commission (EEOC) stepped in to issue guidance to all employers in May 2023, which is discussed below.
The Biden Administration was also concerned with biases lurking in AI technologies. On October 30, 2023, President Biden issued an executive order (“2023 AI EO”) aimed at accelerating the federal government's efforts to establish standards for AI development, use, safety, and security. In response, the Office of Management and Budget (OMB) issued Draft Guidance in March 2024 applicable to all federal government agencies using AI technologies that impact rights or safety. The OMB Draft Guidance requires federal agencies to conduct an annual inventory of all AI use cases, identify which are rights- or safety-impacting, and detail practices to mitigate risk for those uses. The OMB also issued a memorandum with additional details about rights- and safety-impacting AI uses by federal agencies and established new requirements for risk management and governance of AI technologies, including designating a Chief AI Officer and developing plans for managing risk in the face of innovation.
Another important concern is the clash between fairness and privacy. This Article uses the term “privacy” as Gupta et al. define it: a “shorthand to refer to informational privacy, namely the right of individuals to have a meaningful say in the way data about them is collected, stored, and used.” It also recognizes that privacy has other aspects, such as the right to be left alone.
Privacy is (still) a right. Fair treatment is a right. Violations of these rights can produce emotional, physical, and economic harms. Harm can also result from conflicts among these rights. For instance, when employers screen out applicants using identifiable criteria, such as height and weight requirements for prison guards, disparate impacts are clearer: employers see fewer successful female applicants. With machine learning processes, however, we do not know the basis for screening; algorithms analyze hundreds of thousands of data points when determining which résumés are passed along for further review. Compelling applicants to submit to AI screening invades privacy by requiring disclosures that, while appearing facially neutral, can actually have a substantial discriminatory impact.
Consider an example of an applicant screening tool designed to determine which employees are most likely to be productive and have exemplary attendance records. One way to predict future attendance could be to measure how often the employee stays away from work because of illness or other health reasons. In the United States, refusing to hire individuals because of illness could violate the Americans with Disabilities Act, and refusing to hire applicants on the theory that pregnancy would make them more likely to use sick days could violate the Pregnancy Discrimination Act. Programmers can design the algorithm to avoid screening out applicants based on characteristics that would violate these federal laws, but that does not mean that the algorithm will not use information about illness or potential pregnancy in making its determination. The machine learning process, after evaluating large amounts of data, is very likely to find proxy factors to consider that have the effect of screening people out based on their health or likelihood of pregnancy or pregnancy complications. These proxies for poor attendance or bad health may be indiscernible to the employer using the technology, yet they result in real harm.
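To make the proxy problem concrete, the following is a minimal sketch, offered as my own illustration rather than a tool described in this Article, of how a screening model trained on historical attendance data can continue to penalize applicants associated with a health- or pregnancy-related characteristic even when that characteristic is withheld from the model, so long as a facially neutral proxy remains in the data. The synthetic data, the variable names, and the choice of a simple logistic regression are all assumptions made for illustration only.

```python
# Hedged sketch: proxy discrimination in an attendance-screening model.
# All data here are synthetic and all names hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected characteristic the employer never sees (e.g., a health condition
# or likelihood of pregnancy-related leave).
protected = rng.binomial(1, 0.3, n)

# A facially neutral proxy that happens to correlate with the protected
# characteristic (e.g., a gap in recent work history).
proxy = rng.normal(loc=protected * 1.5, scale=1.0, size=n)

# Historical "good attendance" labels that were themselves shaped by the
# protected characteristic -- the pattern the model will learn to reproduce.
good_attendance = rng.binomial(1, np.where(protected == 1, 0.4, 0.8))

# The screening model is trained WITHOUT the protected column...
model = LogisticRegression().fit(proxy.reshape(-1, 1), good_attendance)
screened_in = model.predict(proxy.reshape(-1, 1))

# ...yet its pass rates still diverge sharply by the protected characteristic.
for group in (0, 1):
    rate = screened_in[protected == group].mean()
    print(f"group {group}: pass rate = {rate:.2f}")
```

Removing the protected column from the training data does nothing to remove the correlation; the model simply learns it through the proxy, which is the mechanism the paragraph above describes.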
Instead, we might test for fairness of process, which we can measure at each stage--from recruiting, to application consideration, to interviewing and callback consideration--without disclosing private information about outcomes. Under this sort of “process-defect theory,” processes that are not fair to all violate the fair treatment principle. With AI technologies in the hiring arena, processes are opaque and often outright inscrutable, and thus it is virtually impossible to evaluate fairness in this way.
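As one illustration of what a stage-by-stage process audit could look like, again my own sketch rather than a method this Article prescribes, an auditor might compare selection rates for two applicant groups at each stage and flag any stage where the ratio of rates falls below the EEOC's conventional four-fifths (80 percent) rule of thumb. The stage names and counts below are hypothetical.

```python
# Hedged sketch: stage-by-stage selection-rate audit with a four-fifths check.
stages = {
    # stage name:          ((group A applicants, passed), (group B applicants, passed))
    "application review":  ((5000, 1500), (2000, 450)),
    "interview":           ((1500, 600),  (450, 150)),
    "callback":            ((600, 200),   (150, 40)),
}

for stage, ((a_total, a_pass), (b_total, b_pass)) in stages.items():
    rate_a = a_pass / a_total
    rate_b = b_pass / b_total
    # Impact ratio: the less-favored group's rate divided by the more-favored group's rate.
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{stage:20s} A={rate_a:.2f}  B={rate_b:.2f}  ratio={ratio:.2f}  [{flag}]")
```

The point of the sketch is the limitation noted above: this kind of audit requires knowing group membership and stage outcomes, which is exactly the private information applicants may be reluctant to disclose, and it presumes the stages themselves are visible rather than buried inside an opaque model.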
Implementing AI in hiring processes can inadvertently create discrimination and privacy harms. This Article explores the double bind that impacts people of color and others with nontraditional or nonstandard backgrounds in job-seeking spaces dominated by AI technologies. To alleviate concerns about fairness by permitting auditing of outcomes, this Article argues that applicants must not only disclose but also agree to retention of private information. Without such disclosure and retention of data, the machines may continue to learn in ways that exacerbate, rather than alleviate, biases in hiring.
Based on remarks made by legal scholars at the 2024 Rothgerber Conference: AI and the Constitution at the University of Colorado Law School on Friday, April 19, 2024, during the AI and Privacy panel, this Article proceeds as follows: Part I provides some background on how employers are using AI technologies and highlights concerns with AI-assisted employment processes. Part II describes efforts by the executive branch to regulate AI systems and some of the limits in the federal arena. Part III highlights recent first-time state and local attempts to regulate AI, focusing on a 2023 New York City ordinance. From there, it explores ways to build upon and improve that start. Part IV concludes the Article with additional recommendations for double-blinding data to optimize the balance between privacy and fairness. For instance, organizations that use AI tools to sort and hire job seekers should consider conducting an ethical risk assessment followed by a bias risk assessment. These organizations should then make any needed adjustments before deploying AI in their hiring processes. This assessment should be an iterative process to ensure that the AI tool does not perpetuate bias or prevent the organization from hiring diverse candidates.
[. . .]
Employers using automated employment decision tools (AEDTs) should be aware that they must assess the tools in context--considering the surrounding circumstances, including programming, people, and products. Users from marginalized groups, such as people of color, should be aware that they are under additional pressure to provide data and permit retention because doing so can provide the only mechanism, though imperfect, for addressing fairness among racial, ethnic, and other groups.
So, what can employers do?
(1) Open the AEDT development and deployment processes to diverse creators, testers, and end-users.
(2) Watch for discriminatory impacts on unrepresented and underrepresented groups.
(3) Train your AEDTs. Audit them. Adjust them. Then, deploy them.
Potential employees should be aware of the double bind--the risks and rewards of guarding personal data. Applicants should ask whether a potential employer is using AEDTs, and if so, ask how and in what context. They should question employers about bias and ethical risk assessments, audits, and their outcomes. Applicants should inquire about opt-out provisions and how to request the return or destruction of their individual data after the job search process ends. Data submitted to one potential employer may be stored and even shared with any and every future employer, unless and until there are more guardrails in place to curtail data abuses. But applicants should also recognize that blinding their data can hinder antidiscrimination goals.
Happy (Job) Hunting.
Professor of Law, Pepperdine Caruso School of Law; J.D. Stanford Law School; A.B. cum laude Harvard College.