Selective risk could improve AI fairness and accuracy

A new technique called monotonic selective risk could be deployed to reduce the error rate for underrepresented groups in AI models.
As AI is used in higher-stakes decision-making, making those decisions as fair and accurate as possible, without inherent bias, has become a priority for academic researchers and companies alike.
Researchers from MIT and the MIT-IBM Watson AI Lab have published a new paper cautioning against the use of selective regression in certain scenarios, as the technique can reduce a model’s performance for groups that are underrepresented in a dataset.
These underrepresented groups tend to be women and people of color, and the failure to account for them has fueled repeated reports of AI being racist and sexist. In one case, an AI used for risk assessment incorrectly flagged black prisoners at twice the rate of white prisoners. In another, images of men with no other context were identified as doctors at a higher rate than women, who were more often identified as housewives.
With selective regression, for each input, an AI model is able to make two choices: predict or abstain. The model will only make a prediction if it is confident in the decision, which in several tests has led to better model performance by eliminating inputs that cannot be properly evaluated.
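To make the predict-or-abstain mechanic concrete, here is a minimal sketch of selective regression. It is illustrative only; the uncertainty scores, threshold, and data are assumptions, not the researchers' implementation.

```python
import numpy as np

def selective_predict(predictions, uncertainties, threshold=0.5):
    """Return predictions where the model is confident, NaN where it abstains."""
    accepted = uncertainties <= threshold
    selected = np.where(accepted, predictions, np.nan)
    coverage = accepted.mean()  # fraction of inputs the model actually answers
    return selected, coverage

# Example: five inputs; the model abstains on the two most uncertain ones.
preds = np.array([2.1, 3.4, 0.9, 5.6, 1.2])
uncert = np.array([0.1, 0.7, 0.3, 0.9, 0.2])
selected, coverage = selective_predict(preds, uncert, threshold=0.5)
print(selected)   # [2.1 nan 0.9 nan 1.2]
print(coverage)   # 0.6
```

Performance is then measured only on the inputs the model answered, which is why lowering coverage usually lowers the measured error.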
However, abstaining on these entries can amplify the biases that already exist in the dataset: while the overall error falls, the error rate for underrepresented groups can actually rise, because the abstentions fall disproportionately on them. This can lead to further inaccuracies once the AI model is deployed in real life, where it cannot simply skip over underrepresented groups as it did during development.
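One way to see this effect is to break coverage and error down by group rather than looking only at the aggregate. The sketch below uses made-up data and group labels purely for illustration.

```python
import numpy as np

def group_report(y_true, y_pred, accepted, groups):
    """Per-group coverage and MSE, computed only on the inputs the model answered."""
    report = {}
    for g in np.unique(groups):
        in_group = groups == g
        kept = accepted & in_group
        mse = float(np.mean((y_true[kept] - y_pred[kept]) ** 2)) if kept.any() else float("nan")
        report[g] = {"coverage": float(kept.sum() / in_group.sum()), "mse": mse}
    return report

y_true   = np.array([3.0, 1.0, 4.0, 2.0, 5.0, 0.5])
y_pred   = np.array([2.8, 1.9, 4.1, 3.5, 4.9, 1.5])
accepted = np.array([True, False, True, False, True, True])   # abstentions
groups   = np.array(["A", "B", "A", "B", "A", "B"])
print(group_report(y_true, y_pred, accepted, groups))
# Group B is answered far less often, and its error on the answered inputs is still higher.
```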
“Ultimately it’s about being smarter about the samples you hand over to a human to process. Rather than just minimizing some general error rate for the model, we want to ensure that the error rate between groups is intelligently accounted for,” said MIT lead author Greg Wornell, Sumitomo Professor of Engineering in the Department of Electrical Engineering and Computer Science (EECS).
The MIT researchers introduced a new technique that aims to improve model performance for each subgroup. Called monotonic selective risk, the approach relies on two models rather than a single abstaining one: one model includes sensitive attributes such as race and gender, and the other does not. The two models make decisions in tandem, and the model without the sensitive data is used to calibrate for biases in the dataset.
“It was difficult to find the right notion of fairness for this particular problem. But by applying this criterion, monotonic selective risk, we can ensure that the model performance actually improves in all subgroups when you reduce the coverage,” said Abhin Shah, a graduate student at EECS.
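The criterion Shah describes can be checked directly: as coverage is reduced, the error of every subgroup should not increase. The following sketch sweeps an uncertainty threshold and tests that property; the function names, data shapes, and coverage levels are assumptions for illustration, not the authors' code, and it assumes every group keeps at least one answered input at each coverage level.

```python
import numpy as np

def selective_risks_by_coverage(y_true, y_pred, uncertainties, groups,
                                coverages=(1.0, 0.8, 0.6)):
    """Per-group MSE among answered inputs, at each target coverage level."""
    sweep = []
    for c in coverages:
        cutoff = np.quantile(uncertainties, c)   # keep the c most-confident fraction
        accepted = uncertainties <= cutoff
        risks = {}
        for g in np.unique(groups):
            kept = accepted & (groups == g)
            risks[g] = float(np.mean((y_true[kept] - y_pred[kept]) ** 2))
        sweep.append((c, risks))
    return sweep

def is_monotonic(sweep):
    """True if no group's risk increases as coverage decreases."""
    return all(sweep[i + 1][1][g] <= sweep[i][1][g]
               for i in range(len(sweep) - 1)
               for g in sweep[0][1])
```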
When tested with a medical insurance dataset and a crime dataset, the new technique was able to reduce the error rate for underrepresented groups without significantly affecting the model’s overall performance. The researchers plan to apply the technique to new applications, such as housing prices, student GPA and loan interest rates, to see if it can be calibrated for other tasks.