Thursday, May 05 @ 7:30pm*
Bronfman Auditorium – Wachenheim B11
*Talk open to the public. Private reception to follow for Williams students, faculty and staff.
Friday, May 06 @ 2:35pm*
Computer Science Colloquium – Wege TCL 123
*Williams students, faculty and staff only
Suresh Venkatasubramanian is a professor of computer science and data science, currently on loan to the White House Office of Science and Technology Policy. His background is in theoretical computer science, and he's taken a long and winding path through many areas of data science. For nearly a decade, he's been interested in algorithmic fairness and, more broadly, the impact of automated decision-making systems on society.
The views expressed in these talks are his alone and do not represent those of any institution with which he is affiliated.
“Machine Readable”: The Power and Limits of the Algorithms That Are Shaping Society
Algorithms have infiltrated our society, imposing their own frame of reference on how we conduct ourselves, how we interact with others, and how we are judged. They’ve turbocharged inequality and bias. They’ve accelerated the balkanization of the landscape of ideas, making it ever easier to live within suffocatingly homogeneous ideological and cultural bubbles.
Our obsession with technology has brought out the worst in us, while trying to bring out the best. But it’s done a whole lot more. The story of the algorithmic society is not about how the widespread deployment of technology creates distortions in the world. It is about a particular mindset – an algorithmic lens – that has quietly reframed how we think about society itself.
In this talk I’ll describe the elements of this lens — precision, scale, homogeneity, and consistency. I’ll illustrate how many of the problems we encounter with technology come from the distorting effect of this lens. And I’ll also argue (perhaps surprisingly) that the lens still has much to offer, as long as we can understand where it is most effective and where it is not.
On Equity in Access
Algorithmic fairness as a field of research has grown rapidly in the last fifteen years. The technical core of the field has been about fair decision making: how we measure fairness (and unfairness) in decision making, how we can mitigate unfairness, and how we can audit decision-making systems for bias.
But algorithmic fairness is one manifestation of a broader question: what does it mean for (automated) systems to be equitable? In this talk I want to focus on a different kind of equity — equity in access. We’ve had stark demonstrations over the past few years of the problems that arise when access to resources is distributed unequally: whether in how the ability to vote has been restricted, or in how access to critical health care has been unevenly distributed.
Can we quantify disparities in access? I’ll talk about some recent work that attempts to do exactly this, and in the process will reveal a different way to think about one of the oldest problems in data mining — clustering (and its close cousin facility location).
This talk describes work done jointly with Mohsen Abbasi, Sorelle Friedler, Kristian Lum, Carlos Scheidegger, and Calvin Barrett.