
The RAIN seminar is held on Wednesdays from 12:00-1:00pm in Y2E2 101. And yes, lunch is provided!

RAIN schedule for 2018-2019

Date          Speaker            Topic
October 24    Omer Reingold      A Complexity-Theoretic Perspective on Algorithmic Fairness
December 5    Yiling Chen        Surrogate Scoring Rules and A Dominant Truth Serum
January 16    Vasilis Gkatzelis  TBA
January 23    Ehud Kalai         A Rudimentary Index of Strategic Stability: Deterring Defectors, Sustaining Loyalists and Forming Equilibrium
January 30    Justin Grimmer     TBA
February 20   Rediet Abebe       TBA

Google Calendar for RAIN


Previous year's talks

Archived talks can be accessed here.

Talk Abstracts

A Complexity-Theoretic Perspective on Algorithmic Fairness
Omer Reingold, Stanford

Abstract: Machine learning and data analysis have enjoyed tremendous success in a broad range of domains. These advances hold the promise of great benefits to individuals, organizations, and society as a whole. Undeniably, algorithms are informing decisions that reach ever more deeply into our lives, from news article recommendations to criminal sentencing decisions to healthcare diagnostics. This progress, however, raises (and is impeded by) a host of concerns regarding the societal impact of computation. A prominent concern is that, left to their own devices, algorithms will propagate, and even amplify, existing biases. Often, definitions of fairness are group-based, typically requiring that a given statistic be equal across a few demographic groups, socially identified as deserving protection. Other definitions aim at individual-level protections. Broad-stroke statistical guarantees tend to be much easier to satisfy (information- and complexity-theoretically) but much weaker in the protections they provide. We will discuss recent research towards bridging the gap between statistical and individual protections. These works provide efficient learning algorithms that ensure protection (according to some fairness notion) to every sub-population within some rich class of sets. Our approach to defining the class of sets to protect is complexity-theoretic: we aim at the strongest fairness guarantees that can be obtained with the available computational resources. Based on joint works with Ursula Hebert-Johnson, Michael P. Kim and Guy Rothblum.
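
As a rough illustration of the group-based guarantees discussed above, the sketch below audits a fixed predictor for average bias on each set in a collection of subpopulations. This is a minimal sketch, not the algorithm from the talk; the function name, the tolerance alpha, and the toy data are all assumptions made for illustration.

```python
import numpy as np

def multigroup_bias(scores, labels, subgroups, alpha=0.05):
    """Report subgroups on which the predictor is biased on average.

    scores    : predicted probabilities, shape (n,)
    labels    : realized binary outcomes, shape (n,)
    subgroups : dict mapping a name to a boolean mask of shape (n,);
                stands in for the "rich class of sets" from the abstract
    alpha     : tolerated absolute average bias within a subgroup
    """
    violations = {}
    for name, mask in subgroups.items():
        if mask.any():
            bias = float(np.mean(scores[mask] - labels[mask]))
            if abs(bias) > alpha:
                violations[name] = bias
    return violations

# Toy data: the predictor looks unbiased overall, but its errors on two
# subpopulations cancel out, which the per-group audit exposes.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=10_000)
group = rng.integers(0, 2, size=10_000).astype(bool)
scores = labels * 0.7 + 0.15                  # calibrated on average
scores = scores + np.where(group, 0.1, -0.1)  # offsetting subgroup errors
print(multigroup_bias(scores, labels, {
    "everyone": np.ones(10_000, dtype=bool),
    "group_A": group,
    "group_B": ~group,
}))
```

The results in the talk go further: rather than auditing a hand-picked list of sets like the one here, they give efficient learning algorithms whose guarantees hold simultaneously for every subpopulation in a rich, computationally defined class.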




Surrogate Scoring Rules and A Dominant Truth Serum
Yiling Chen, Harvard

Abstract: Strictly proper scoring rules (SPSR) are widely used when designing incentive mechanisms to elicit private information from strategic agents using realized ground truth signals, and they can help quantify the value of elicited information. In this paper, we extend such scoring rules to settings where a mechanism designer does not have access to ground truth. We consider two such settings: (i) a setting where the mechanism designer has access to a noisy proxy version of the ground truth, with known biases; and (ii) the standard peer prediction setting, where agents' reports are the only source of information that the mechanism designer has. We introduce surrogate scoring rules (SSR) for the first setting, which use the noisy ground truth to evaluate the quality of elicited information. We show that SSR preserve the strict properness of SPSR. Using SSR, we then develop a multi-task scoring mechanism, called dominant truth serum (DTS), to achieve strict properness when the mechanism designer only has access to agents' reports. In comparison to standard equilibrium concepts in peer prediction, we show that DTS can achieve truthfulness in a multi-task dominant strategy. A salient feature of SSR and DTS is that they quantify the quality of information despite the lack of ground truth, just as proper scoring rules do in the setting with verification. Our method is verified both theoretically and empirically using data collected from real human participants. This talk is based on joint work with Yang Liu.
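
For intuition, here is a minimal sketch of the surrogate-scoring idea for a single binary event in the first setting, assuming the proxy label's flip rates are known. It uses the standard unbiased-correction trick for noisy labels; the construction in the paper may differ in its details, and all names and parameters below are illustrative.

```python
def brier(report, outcome):
    """Strictly proper Brier score (negated squared error; higher is better)."""
    return -(report - outcome) ** 2

def surrogate_score(report, noisy_outcome, e0, e1, score=brier):
    """Score a report against a noisy proxy label so that, in expectation
    over the noise, it equals the score against the true label.

    e0 = P(noisy = 1 | true = 0), e1 = P(noisy = 0 | true = 1).
    Requires e0 + e1 < 1 (the proxy must carry some signal).
    """
    assert e0 + e1 < 1
    e = (e0, e1)                  # e[y] = probability that true label y flips
    other = 1 - noisy_outcome
    corrected = ((1 - e[other]) * score(report, noisy_outcome)
                 - e[noisy_outcome] * score(report, other))
    return corrected / (1 - e0 - e1)

# Sanity check: averaging the surrogate score over the label noise recovers
# the clean Brier score, which is why strict properness is preserved.
import random
random.seed(1)
report, true_y, e0, e1 = 0.8, 1, 0.2, 0.1
draws = [surrogate_score(report,
                         true_y if random.random() > (e0, e1)[true_y] else 1 - true_y,
                         e0, e1)
         for _ in range(200_000)]
print(sum(draws) / len(draws), "vs", brier(report, true_y))
```

Roughly speaking, DTS then removes the need for even this noisy proxy by drawing the proxy information from the other agents' reports across multiple tasks.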




A Rudimentary Index of Strategic Stability: Deterring Defectors, Sustaining Loyalists and Forming Equilibrium
Ehud Kalai, Northwestern

Abstract: A rudimentary index explains Nash-equilibrium choices observed in behavioral economics. The index assigns stability levels φ(π) ∈ {0, 1, ..., n} to strategy profiles π of n-person games: level 0 is assigned to profiles that are not Nash equilibria, levels 1, 2, ..., n-1 are assigned to Nash equilibria at increasing levels of stability, and level n is assigned to dominant-strategy equilibria. The index measures players' confidence in the optimality of their choices, and expands Nash's question of "whether a strategy profile deters individual defectors" to "what size defector groups does a strategy profile deter".
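
To make the definition concrete, here is a small sketch of one plausible reading of the index for finite games in pure strategies; the paper's exact definition may differ. A defector group is "deterred" here if no joint deviation makes every member strictly better off, and all names below are illustrative.

```python
from itertools import combinations, product

def strategy_sets(payoffs, n):
    """Each player's pure strategies, read off the payoff table's keys."""
    return [sorted({profile[i] for profile in payoffs}) for i in range(n)]

def deters(payoffs, profile, coalition, strats):
    """True if no joint deviation by `coalition` leaves every one of its
    members strictly better off while the other players stick to `profile`."""
    base = payoffs[profile]
    for joint in product(*(strats[i] for i in coalition)):
        dev = list(profile)
        for i, s in zip(coalition, joint):
            dev[i] = s
        dev = tuple(dev)
        if dev != profile and all(payoffs[dev][i] > base[i] for i in coalition):
            return False
    return True

def is_dominant(payoffs, profile, strats):
    """True if each player's strategy is a best response to every possible
    profile of the opponents (a dominant-strategy equilibrium)."""
    n = len(profile)
    for i in range(n):
        for combo in product(*(strats[j] for j in range(n) if j != i)):
            def full(si):
                prof = list(combo)
                prof.insert(i, si)
                return tuple(prof)
            if payoffs[full(profile[i])][i] < max(payoffs[full(s)][i]
                                                  for s in strats[i]):
                return False
    return True

def stability_index(payoffs, profile):
    """phi(profile): 0 if not Nash; k if it deters every defector group of
    size at most k; n if it is a dominant-strategy equilibrium."""
    n = len(profile)
    strats = strategy_sets(payoffs, n)
    phi = 0
    for k in range(1, n):
        if all(deters(payoffs, profile, c, strats)
               for c in combinations(range(n), k)):
            phi = k
        else:
            return phi
    return n if is_dominant(payoffs, profile, strats) else phi

# Prisoner's dilemma: (D, D) is a dominant-strategy equilibrium, so phi = n = 2.
# Coordination game: (A, A) is a Nash equilibrium but not dominant, so phi = 1.
pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
coord = {("A", "A"): (2, 2), ("A", "B"): (0, 0),
         ("B", "A"): (0, 0), ("B", "B"): (1, 1)}
print(stability_index(pd, ("D", "D")), stability_index(coord, ("A", "A")))
```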