Organizations are increasingly using machine learning models to allocate scarce resources and opportunities: For example, such models help companies sift through resumes to select candidates for interviews or help hospitals rank kidney transplant patients based on their chances of survival.
When deploying a model, users typically strive to make its predictions fair by reducing bias, which often involves techniques such as tuning the features the model uses to make decisions or adjusting the scores it produces.
But researchers from MIT and Northeastern University argue that these fairness techniques are insufficient to address structural inequities and inherent uncertainty. In a new paper, they show that structurally randomizing model decisions can improve fairness in certain situations.
For example, if multiple companies use the same machine learning model to deterministically rank candidates for job interviews without randomization, one deserving individual may end up being the lowest ranking candidate for all jobs because of the way the model evaluates the answers provided in online forms. Introducing randomization into the model’s decisions can prevent one deserving individual or group from being consistently denied a scarce resource like a job interview.
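To see why this matters, consider a toy simulation (ours, not the paper's): twenty hypothetical applicants are scored once, and ten employers each interview their top three. Under deterministic ranking, every employer interviews the same three people, while a score-weighted lottery spreads interviews more widely. All numbers and the lottery rule below are illustrative assumptions.

```python
import numpy as np

# Toy illustration, not from the paper: ten employers screen the same pool
# of twenty applicants, each granting three interviews based on model scores.
rng = np.random.default_rng(0)
scores = rng.uniform(size=20)          # hypothetical model scores
n_employers, k = 10, 3

# Deterministic ranking: every employer interviews the identical top-k
# applicants, so everyone else is denied by all ten employers.
interviewed_det = set(np.argsort(scores)[-k:].tolist())

# Score-weighted lottery: each employer draws k applicants with probability
# proportional to score, so lower-ranked applicants still get some chances.
probs = scores / scores.sum()
interviewed_rand = set()
for _ in range(n_employers):
    picks = rng.choice(len(scores), size=k, replace=False, p=probs)
    interviewed_rand.update(picks.tolist())

print("applicants with at least one interview, deterministic:", len(interviewed_det))
print("applicants with at least one interview, lottery:      ", len(interviewed_rand))
```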
Through their analysis, the researchers found that randomization is particularly beneficial when the model’s decisions involve uncertainty or when the same groups consistently receive negative decisions.
They present a framework that can be used to introduce a certain amount of randomization into a model’s decisions by allocating resources through a weighted lottery. This method, which individuals can tailor to their own circumstances, can improve fairness without compromising the efficiency or accuracy of the model.
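The paper defines its lottery precisely; the sketch below only gestures at the general shape of such a mechanism. The function name and the `weight` knob are our own illustrative choices, not the authors' notation.

```python
import numpy as np

def weighted_lottery(scores, k, weight=1.0, rng=None):
    """Allocate k slots by lottery, with winning odds tied to model scores.

    `weight` is an illustrative knob for how strongly scores matter:
    weight = 0 gives a uniform lottery, while a very large weight approaches
    deterministic top-k ranking. (A sketch, not the paper's exact rule.)
    """
    rng = np.random.default_rng() if rng is None else rng
    probs = np.asarray(scores, dtype=float) ** weight
    probs = probs / probs.sum()
    return rng.choice(len(probs), size=k, replace=False, p=probs)

# Example: five interview slots among ten scored candidates.
scores = [0.91, 0.88, 0.84, 0.80, 0.77, 0.74, 0.70, 0.65, 0.52, 0.40]
print(weighted_lottery(scores, k=5, weight=2.0, rng=np.random.default_rng(1)))
```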
“Even if we could make fair predictions, should societal allocation of scarce resources and opportunities be determined solely by scores or rankings? As things scale up and more opportunities are allocated algorithmically, the uncertainty inherent in those scores can be amplified. We show that fairness may require some sort of randomization,” says Shomik Jain, a graduate student in the Institute for Data, Systems, and Society (IDSS) and lead author of the paper.
Jain is joined on the paper by Kathleen Creel, assistant professor of philosophy and computer science at Northeastern University, and senior author Ashia Wilson, the Lister Brothers Career Development Professor in the Department of Electrical Engineering and Computer Science and a principal investigator in the Laboratory for Information and Decision Systems (LIDS) at MIT. The research will be presented at the International Conference on Machine Learning.
Consider a claim
The study builds on an earlier paper in which the researchers explored the harms that can occur when deterministic systems are used at scale. They found that using machine learning models to deterministically allocate resources can amplify inequalities present in the training data, potentially reinforcing bias and systemic inequalities.
“Randomization is a very useful concept in statistics, and the good news is that it satisfies the need for fairness from both a systemic and an individual perspective,” Wilson says.
The new paper explores when randomization can improve fairness. The researchers based their analysis on ideas from the philosopher John Broome, who has written about the value of using lotteries to distribute scarce resources in a way that respects the claims individuals hold.
A person’s claim to a scarce resource like a kidney transplant can arise from merit, entitlement, or need. For example, every person has a right to life, and a claim to a kidney transplant may derive from that right, Wilson explains.
“If we accept that people have different claims to these scarce resources, fairness requires that we respect all of those individual claims. Would it be fair to always give resources to those with stronger claims?” Jain says.
Such deterministic allocations can cause systematic exclusion or exacerbate patterned inequalities, which occur when receiving an allocation once makes an individual more likely to receive an allocation in the future. Furthermore, machine learning models can make mistakes, and a deterministic approach could repeat the same mistakes.
Randomization can overcome these problems, but that doesn’t mean that every decision the model makes needs to be equally randomized.
Structured randomization
The researchers use a weighted lottery to adjust the level of randomization based on the amount of uncertainty in the model’s decisions: the less certain a decision is, the more randomization it should incorporate.
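One plausible way to read that calibration rule, sketched below with made-up numbers, is to perturb each candidate's score with noise whose scale tracks that candidate's uncertainty before ranking; the paper's exact calibration may differ.

```python
import numpy as np

def uncertainty_calibrated_topk(scores, uncertainties, k, rng=None):
    """Rank on noisy scores, so less certain estimates get more randomization.

    Candidates whose scores carry large uncertainty are perturbed more before
    ranking; near-certain scores stay almost deterministic. (A sketch of the
    idea, not the paper's exact mechanism.)
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = np.asarray(scores, dtype=float) + rng.normal(0.0, np.asarray(uncertainties, dtype=float))
    return np.argsort(noisy)[-k:][::-1]    # indices of the k winners, best first

# Hypothetical projected-benefit scores and their uncertainty estimates:
# the top three are statistically hard to tell apart, the last one is not.
scores        = [7.2, 7.0, 6.9, 5.1]
uncertainties = [1.5, 1.4, 1.6, 0.2]
print(uncertainty_calibrated_topk(scores, uncertainties, k=2, rng=np.random.default_rng(0)))
```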
“In kidney allocation, planning is typically built around projected lifespan, which is deeply uncertain. If two patients’ projected lifespans differ by only five years, that difference becomes much harder to measure. We want to take advantage of that level of uncertainty when tailoring the randomization,” Wilson says.
The researchers used statistical uncertainty quantification methods to determine how much randomization would be needed in different situations. They showed that adjusted randomization could potentially produce fairer outcomes for individuals without significantly affecting the utility or validity of the model.
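The article does not say which uncertainty quantification method the authors used; one common option, sketched below on synthetic data, is a bootstrap ensemble whose spread of predictions serves as the per-candidate uncertainty fed into the lottery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: features X and outcomes y for past cases.
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -0.5, 0.3]) + rng.normal(scale=0.5, size=200)

# Bootstrap ensemble: refit a simple linear model on resampled data and see
# how much its predictions for new candidates disagree across refits.
candidates = rng.normal(size=(4, 3))
preds = []
for _ in range(200):
    idx = rng.integers(0, len(X), size=len(X))              # resample with replacement
    coef, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)  # refit on the resample
    preds.append(candidates @ coef)
preds = np.array(preds)

scores = preds.mean(axis=0)          # point estimate per candidate
uncertainties = preds.std(axis=0)    # ensemble spread = uncertainty estimate
print(np.round(scores, 2), np.round(uncertainties, 2))
# These spreads could then set how much randomization each decision receives.
```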
“There must be a balance between overall utility and respecting individuals’ rights to receive scarce resources, but often the trade-off is relatively small,” Wilson says.
But the researchers stress that there are some situations, such as in criminal justice, where randomizing decisions does not improve fairness and may even harm individuals.
There may be other areas, however, where randomization could improve fairness, such as college admissions, and the researchers plan to explore additional use cases in future work. They also want to study how randomization affects other factors, such as competition and prices, and how it could be used to improve the robustness of machine learning models.
“We hope that our paper is a first step towards showing that randomization may have benefits. We are providing randomization as a tool. How much randomization to use is a decision that is up to all the parties involved in the allocation. And of course, how they decide is another research question,” Wilson says.