Monday, March 3, 2025

The Challenges of Making Fair Machines

Math and Science News from Quanta Magazine
Each week Quanta Magazine explains one of the most important ideas driving modern research. This week, computer science staff writer Ben Brubaker explains how theoretical computer scientists bring mathematical rigor to questions about the social impact of algorithms.

 

The Challenges of Making Fair Machines

By BEN BRUBAKER

These days, algorithms are everywhere. They dictate the ads we see online, screen job applications, and set the prices of goods and services, among countless other tasks. That makes it all the more important to understand the ways they can go wrong — for instance, by leaking sensitive personal data or discriminating among job applicants. Can mathematics provide insights into how to stop algorithms from misbehaving?
 
This is the challenge facing researchers who use the tools of theoretical computer science to study algorithmic privacy and fairness, and it's no easy task. As I discussed in the September 9, 2024 edition of Fundamentals, theoretical computer scientists traditionally study objective properties of algorithms — like speed and memory usage — that are easy to quantify, at least in principle. But it's not obvious how to translate more subjective notions of privacy and fairness into precise mathematical terms.
 
Take privacy, for example. When organizations analyze data sets that contain sensitive personal information, they often take steps to anonymize the records. But it's surprisingly easy to thwart such measures by cross-referencing supposedly anonymous data with publicly available information. "Human intuition about what is private is not especially good," as the computer scientist Frank McSherry put it in 2012.
 
Fairness is an even slipperier concept, in part because it's inherently multifaceted. There are many ways to divide any population into groups, and it's not always possible to find one algorithm that looks fair from every vantage point. Despite these challenges, thinking mathematically about privacy and fairness has its advantages. It offers a way to pinpoint how decisions in algorithm design impact the real world.
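
The tension between vantage points can be made concrete with a toy calculation. Here is a small sketch (the data and function names are invented for illustration) showing how a single classifier can satisfy one common fairness criterion, equal selection rates across groups, while violating another, equal true-positive rates, simply because the groups have different base rates:

```python
# Toy illustration: one classifier can look fair by one metric and unfair
# by another when two groups have different base rates. All data invented.

def positive_rate(preds):
    """Fraction of the group that the classifier accepts."""
    return sum(preds) / len(preds)

def true_positive_rate(labels, preds):
    """Fraction of truly qualified members that the classifier accepts."""
    qualified = [p for l, p in zip(labels, preds) if l == 1]
    return sum(qualified) / len(qualified)

# Group A: 2 of 4 members are qualified.  Group B: 3 of 4 are qualified.
labels_a, preds_a = [1, 1, 0, 0], [1, 1, 0, 0]
labels_b, preds_b = [1, 1, 1, 0], [1, 1, 0, 0]

# Both groups are accepted at the same rate (one notion of fairness holds) ...
assert positive_rate(preds_a) == positive_rate(preds_b) == 0.5

# ... yet qualified members of B are accepted less often than those of A
# (another notion of fairness fails).
print(true_positive_rate(labels_a, preds_a))  # 1.0
print(true_positive_rate(labels_b, preds_b))  # ≈ 0.667
```

No tweak to the classifier fixes both numbers at once here: equalizing the acceptance rates forces unequal treatment of qualified members, and vice versa.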
 
Ultimately, computer science research can't tell us how to weigh tradeoffs between openness and privacy, or which definition of fairness we should prefer. But it can help us better understand the consequences of our choices. In a world awash in algorithms, that's more important than ever.
 
What's New and Noteworthy

In a landmark 2005 paper, the computer scientist Cynthia Dwork and two colleagues introduced a notion called "differential privacy," which has allowed researchers to precisely quantify the tradeoff between anonymity and accuracy for data analysis algorithms. Erica Klarreich unpacked how it works in one of the first stories published in Quanta. Since then, Dwork has turned her attention to fairness in classification algorithms, such as the algorithms that decide whether to accept or reject loan applications. In a 2016 Q&A with Quanta's Kevin Hartnett, Dwork discussed why the multitude of competing considerations makes fairness harder to study than privacy, and what motivates her to tackle such a daunting problem.
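
The anonymity-accuracy tradeoff at the heart of differential privacy can be sketched in a few lines. The standard building block is the Laplace mechanism: add random noise, scaled to the query's sensitivity divided by a privacy parameter epsilon, to the true answer. The code below is a minimal illustration of that idea, not the construction from Dwork's paper; the function names and numbers are mine.

```python
# Sketch of the Laplace mechanism, the basic building block of
# differential privacy. Smaller epsilon = more noise = more privacy,
# at the cost of a less accurate answer.
import math
import random

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Return true_answer plus Laplace noise with scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5                     # uniform in (-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_answer + noise

# A counting query ("how many records satisfy P?") changes by at most 1
# when any one person's record is added or removed, so its sensitivity is 1.
true_count = 1234
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

The noisy answer is still useful in aggregate (the noise averages out over many queries), but no single released number reveals much about any one individual — exactly the tradeoff the theory lets researchers quantify.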
 
Fairness is also a central concern when you want to divide resources among parties with competing claims. Computer scientists often study this problem using a playful scenario: How can you fairly divide a cake? With only two people, the task is easy. If one person cuts and the other chooses which slice to take, there's no way for either participant to cheat. But it wasn't until 2016 that researchers figured out an algorithm that works for any number of participants. Klarreich covered that breakthrough as well, and followed it up with an explainer exploring tensions between different notions of fairness in the cake-cutting problem. Even in seemingly simple situations, fairness can get messy.
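
The two-person cut-and-choose protocol described above is simple enough to sketch directly. Here the cake is the interval [0, 1] and each player's preferences are modeled as a valuation function over slices; the function names and example valuations are my own illustration.

```python
# Sketch of two-person "cut and choose" fair division. The cake is [0, 1];
# each valuation v(a, b) gives the value a player assigns to the slice [a, b],
# with v(0, 1) == 1 for both players.

def cut_and_choose(cutter_value, chooser_value, tol=1e-9):
    # The cutter bisects: binary-search for a cut point x where each side
    # is worth exactly 1/2 in the cutter's own eyes.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cutter_value(0.0, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    # The chooser takes whichever slice they value more.
    left, right = (0.0, x), (x, 1.0)
    if chooser_value(*left) >= chooser_value(*right):
        return {"chooser": left, "cutter": right}
    return {"chooser": right, "cutter": left}

# Example: the cutter values the cake uniformly; the chooser only cares
# about the chocolate half [0.5, 1].
uniform = lambda a, b: b - a
chocolate = lambda a, b: 2.0 * max(0.0, min(b, 1.0) - max(a, 0.5))

slices = cut_and_choose(uniform, chocolate)
# The cutter bisects at x ≈ 0.5; the chooser takes the chocolate half.
```

Neither player can do better by deviating: the cutter is indifferent between the halves by construction, and the chooser picks their favorite, so each is guaranteed at least half the cake by their own measure.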
 
Sometimes, research on fairness and privacy can reveal connections to traditional topics in theoretical computer science. Recently, Dwork and other researchers discovered that algorithmic fairness is related to what makes certain computational problems hard to solve. Lakshmi Chandrasekaran reported on that surprising result last year. It's a nice illustration that scientific progress doesn't always proceed from theory to real-world applications — sometimes, it goes the other way.

AROUND THE WEB
In a collaboration with the U.S. Census Bureau, the minutephysics YouTube channel produced an engaging video explainer on differential privacy.
Three computer scientists wrote a paper arguing that the traditional approach to differential privacy isn't enough to safeguard sensitive information in the age of AI models trained on public data from around the internet.
In a provocative Substack post, the machine learning researcher Ben Recht argued that a mathematical approach to algorithmic fairness is misguided.
Simons Foundation

160 5th Avenue, 7th Floor
New York, NY 10010

Copyright © 2025 Quanta Magazine, an editorially independent division of Simons Foundation
