Abstract
Excerpted From: Hannah van Kolfschooten, The AI Cycle of Health Inequity and Digital Ageism: Mitigating Biases Through the EU Regulatory Framework on Medical Devices, 10 Journal of Law & the Biosciences 1 (July-December, 2023) (129 Footnotes) (Full Document)
Is there enough information on the safety of medicines for older patients, when clinical trials generally exclude patients above 65 years old? Does face recognition technology still function when people develop facial wrinkles? Will it be possible in the future to access medical records without internet access? Will health professionals offer highly recommended Artificial Intelligence (AI)-based treatment to patients they assume are too old to understand technology? Empirical research shows how chronological age as a sole factor directly impacts the quality of healthcare and overall health status. In healthcare, age is not just a number.
Europe's population is aging, and at the same time there is a growing shortage of health personnel. In response, the use of AI medical devices in healthcare in the European Union (EU) is growing quickly. As life expectancy improves and healthcare utilization increases with age, the users of AI medical devices will predominantly be older patients, as this group is the main user of healthcare in general. What distinguishes AI medical devices from classical software medical devices is their ability to autonomously recognize patterns in big datasets and make predictions for individual patients. Many AI medical devices also make use of machine learning techniques and have the capacity to evolve automatically over time based on data input and performance assessment. Examples of AI medical devices currently on the EU market include software that automatically detects pulmonary nodules on chest CT scans and tracks their growth over time, smart hearing aids, and autonomous ophthalmology cameras that detect eye diseases. Medical AI shows promising prospects for increasing the efficiency, efficacy, and quality of medical care--but not for everyone. Its health outcomes often vary between population groups, including age groups. While AI can indeed be beneficial for personalizing healthcare for older patients, the risks for precisely this group are often overlooked.
Discrimination is one of the greatest risks posed by automated decision-making systems. Regulators worldwide are racking their brains over how to protect citizens against the harms of biased AI systems without hindering innovation. The ethical AI discourse is mainly focused on gender and racial biases and their risks for discrimination. Pervasive age-related biases in AI systems that lead to ageism, however, go largely unnoticed and unchallenged. The lack of scholarly attention to the perception of chronological age in AI is surprising, given that ageism is the most prevalent type of discrimination according to the Eurobarometer on discrimination in the EU. Butler first coined the term 'ageism' in the 1960s, referring to biases, stereotypes, negative attitudes, and discrimination toward older people based upon chronological age. The terms 'digital ageism' and 'AI ageism' are used to describe age-related biases in new technology such as AI.
The World Health Organization (WHO) is sounding the alarm about the increasing practice of ageism in healthcare in general, and in medical AI systems in particular. Ageism persists especially across healthcare settings, where older adults are commonly stereotyped as physically weak, incompetent, dependent, incapable of autonomous decision-making, or dispensable. The way in which older people experience discrimination in healthcare is also influenced by intersectional factors such as race, gender, and ethnic origin. Along the same lines, many AI devices used in healthcare show a correlation between the chronological age of the user and health outcomes.
To explain this phenomenon, this paper introduces the concept of the AI cycle of health inequity: existing practices of discrimination in healthcare are programmed into AI systems, which replicate these biases in their output, creating a reinforcing loop that results in health inequity, that is, avoidable and therefore unfair systematic differences in the health status of different population groups. AI systems can generate biases in all phases of the AI lifecycle, from data collection to modeling to application in clinical practice. Biases can be both technical, for example when the training data neglect atypical presentations of disease in older adults, and contextual, for example when medical treatment requires the use of a mobile device and the digital literacy of older patients is not considered in the deployment of the AI tool. As a result, AI medical devices risk producing discriminatory results for older patients, posing potential risks to their health and fundamental rights protection. This issue is even more pressing now that the average age of AI medical device users is quickly rising. Age discrimination is prohibited under EU antidiscrimination law. But what if the use of AI medical devices causes discrimination?
The EU has assumed an important position in the promotion and protection of nondiscrimination rights and (health) equity, resulting in a broad range of legislative and policy instruments on equal treatment and nondiscrimination. Consequently, people in the EU have a right to be protected against discrimination. At the same time, the EU has the obligation to take measures to protect the functioning of the internal market: to ensure the free movement of goods and guarantee high safety standards for consumers. For medical devices, this means that the EU sets legal safety requirements for market entry under the EU Medical Devices Regulation (MDR). Manufacturers need to obtain certification for their products and prove the efficacy, quality, and safety of their AI medical devices. In response to societal concerns about fundamental rights violations, the European Commission has proposed new legislation to regulate AI systems, introducing minimum standards for AI systems in a horizontal AI Act. The AI Act essentially takes the same product safety approach as the MDR, but also explicitly aims to protect against AI discrimination. For AI medical devices, the minimum standards proposed in the AI Act add a layer to the existing safety and quality standards under the MDR. This multilayered system of regulation aims to protect the safety, health, and fundamental rights of users while at the same time fostering innovation. However, while these aims are laudable and harmonization in this field is commendable, it remains to be seen how Member States implement these technical requirements in practice.
The issue of ageism in medical AI plays a role in all layers of EU legislation: it affects fundamental rights (namely nondiscrimination and access to healthcare), but is also an issue of internal market law, as the AI market may not always meet the health needs of older patients. Does the EU regulatory framework for medical devices protect users of AI medical devices against age-related biases and resulting discrimination?
The main objective of this paper is to offer an EU legal perspective on digital ageism in the context of AI in medical decision-making. This paper makes three contributions to the existing literature: (i) it problematizes the lack of attention to AI ageism from a medical, ethical, and legal viewpoint, (ii) it conceptualizes the relationship between biases and health discrimination in the 'AI cycle of health inequity', and (iii) it provides a thorough legal analysis of the new EU regulatory framework for AI medical devices from the perspective of bias mitigation. While the legal analysis zooms in on age-related biases, most observations are also applicable to the wider issue of biases in AI medical devices. The legal analysis focuses only on the EU regulatory regime for AI medical devices, but its observations and conclusions may be useful for regulators in other parts of the world as well, as many regions face the challenges of aging populations, persisting ageism, and regulatory questions on balancing the risks and benefits of AI medical devices. This paper does not focus on AI that was specifically designed for elderly care--also known as 'gerontechnology'--but instead investigates general AI medical devices, designed for a broad category of patients irrespective of chronological age. An explicit choice was made to refrain from further defining 'older patients' so as not to contribute to harmful stereotyping.
This paper proceeds as follows. Section II briefly discusses the main medical, ethical, and legal concerns of ageism and discrimination in healthcare. Section III explains how ageism manifests in the design and use of AI medical devices, discussing the various sources of age bias against the background of the AI cycle of health inequity. Sections IV and V then assess the EU legislative approach to medical AI, specifically the MDR and the AI Act, evaluating the legal protection for older patients experiencing ageism. The identified limitations of the MDR for addressing age-related biases are used to guide the evaluation of the AI Act. Section VI concludes that, while the EU legal framework does address the key issues related to technical biases in medical AI, it does not account for contextual biases, therefore neglecting part of the cycle of health inequity.
[. . .]
In conclusion, the existing MDR demonstrates significant shortcomings in addressing ageism in AI medical devices. These limitations primarily arise from the lack of guidance on bias assessment, clinical evaluation, and the relation to other legislative frameworks, as well as from limited transparency and public disclosure. The MDR's broad scope and high-level requirements for software hinder its effectiveness in accommodating the unique characteristics of AI medical devices.
The new conformity requirements introduced by the AI Act offer potential solutions, but their effectiveness is contingent upon the yet-to-be-published conformity standards from the EU standardization bodies and their approval by the European Commission. Furthermore, while the proposal emphasizes the importance of bias reduction, it primarily focuses on biases in the underlying data and lacks provisions to address contextual biases related to stereotypes, prejudices, and unconscious biases. This limits its effectiveness in combating discrimination, particularly in the application phase of AI systems. There is, however, some 'low-hanging fruit' to pick, which can still be addressed in the trilogue negotiations: the AI Act obligation to register AI systems in an AI database could be extended to health professionals; public access to clinical evaluation information in the EUDAMED database could be extended to Class IIa and IIb medical devices to increase transparency; and linking the EUDAMED database with the AI database could provide a more comprehensive understanding of AI-based devices to guide medical decision-making.
These solutions do not, however, detract from the fact that the key issue with the EU regulatory framework for AI medical devices is its narrow understanding of the challenges posed by bias in AI. Bias is regarded as an issue of patient safety and product performance. By framing bias in this manner, rather than from the perspective of fundamental rights protection and health (in)equity, the regulatory measures for bias mitigation were designed primarily to address product performance, focusing solely on the product itself and not on the wider context in which the product is developed and used.
The AI cycle of health inequity shows that the issue of bias extends far beyond the product itself. A product safety approach to AI medical devices is insufficient for adequately mitigating biases, especially the more invisible bias of digital ageism, as some of the sources of age bias exist in the real world, external to the product. The current EU framework does address some of the key issues related to technical biases in AI medical devices by stipulating rules for performance and data quality, but it does not account for contextual biases, thereby neglecting an important part of the cycle of health inequity. A significant portion of digital ageism arises from how these systems are deployed, including whether health professionals prescribe them to older patients and the level of health literacy involved.
Considering that AI is not merely a product but a complex system, the EU's regulatory paradigm, which was primarily designed for product regulation, requires a comprehensive system approach to regulate AI effectively. A fundamental rights approach to AI medical device regulation would center on the impact of the device on individuals (eg the health professional and the user) in every single phase of the lifecycle of the AI medical device, rather than on the product itself. At the same time, it is important to recognize that the AI cycle of health inequity resulting from age--and other--biases in AI medical devices extends beyond individual health status and individual fundamental rights protection: it in fact reinforces persisting ageism (and other forms of discrimination) in society, eventually leading to health inequity.
The MDR demonstrates inadequate adaptation to the new reality of AI-based medical devices. The AI Act proposes some improvements but fails to adequately address the health-specific nature of AI medical devices. In light of these findings, it is imperative for policymakers, regulators, and stakeholders to recognize the limitations of existing regulations and work toward a comprehensive and tailored approach to addressing ageism in AI medical devices. This entails incorporating robust guidance, promoting transparency, addressing deployment practices, and establishing a health-specific legal framework for data governance. Only through such efforts can the potential of AI technology be harnessed to ensure equitable and effective healthcare for all age groups. Today, the AI Act is only a proposal: it is now in the hands of the Parliament and the Council to ensure that the final text safeguards health equity.
Law Centre for Health and Life, University of Amsterdam, Amsterdam, Netherlands
Amsterdam Institute for Global Health and Development, Amsterdam, Netherlands
Corresponding author. E-mail: