
The RAIN seminar is held on Wednesdays from 12 to 1 pm PT this year. You can subscribe to the seminar mailing list here.

RAIN schedule for Spring 2022

Date    | Speaker            | Topic | Comment
Apr. 27 | Federico Echenique | TBD   |

RAIN schedule for Winter 2022

Date    | Speaker      | Topic                       | Comment
Mar. 9  | Ron Berman   | TBD                         |
Mar. 2  | Christina Yu | TBD                         |
Feb. 23 | Omer Tamuz   | Private Private Information |

RAIN schedule for Fall 2021

Date    | Speaker            | Topic                                                                                          | Comment
Nov. 30 | Vijay Vazirani     | Online Bipartite Matching and Adwords                                                          | Tuesday 3 pm, joint with CS Theory
Nov. 3  | Douglas Guilbeault | How communication networks promote cross-cultural similarities: The case of category formation |
Oct. 20 | Katrina Ligett     | Gaming Helps! Learning from Strategic Interactions in Natural Dynamics                         |
Oct. 6  | Nihar Shah         | Two F-words in Peer Review (Fraud and Feedback)                                                |

Google Calendar for RAIN

Previous year's talks

Archived talks can be accessed here.

Talk Abstracts

Private Private Information
Omer Tamuz

Abstract: In a private private information structure, agents’ signals contain no information about the signals of their peers. We study how informative such structures can be, and characterize those that are on the Pareto frontier, in the sense that it is impossible to give more information to any agent without violating privacy. In our main application, we show how to optimally disclose information about an unknown state under the constraint of not revealing anything about a correlated variable that contains sensitive information.
Bio: Omer Tamuz is a professor of economics and mathematics at Caltech.

Online Bipartite Matching and Adwords
Vijay V. Vazirani

Abstract: Over the last three decades, the online bipartite matching (OBM) problem has emerged as a central problem in the area of Online Algorithms. Perhaps even more important is its role in the area of Matching-Based Market Design. The resurgence of this area, with the revolutions of the Internet and mobile computing, has opened up novel, path-breaking applications, and OBM has emerged as its paradigmatic algorithmic problem. In a 1990 joint paper with Richard Karp and Umesh Vazirani, we gave an optimal algorithm, called RANKING, for OBM, achieving a competitive ratio of (1 – 1/e); however, its analysis was difficult to comprehend. Over the years, several researchers simplified the analysis. We will start by presenting a “textbook quality” proof of RANKING. Its simplicity raises the possibility of extending RANKING all the way to a generalization of OBM called the adwords problem. This problem is both notoriously difficult and very significant, the latter because of its role in the AdWords marketplace of Google. We will show how far this endeavor has gone and what remains. We will also provide a broad overview of the area of Matching-Based Market Design and pinpoint the role of OBM. Based on:
Bio: Vijay Vazirani got his undergraduate degree from MIT in 1979 and his PhD from the University of California, Berkeley in 1983. He is currently a Distinguished Professor at the University of California, Irvine. Vazirani has made fundamental contributions to several areas of the theory of algorithms, including algorithmic matching theory, approximation algorithms and algorithmic game theory, as well as to complexity theory, in which he established, with Les Valiant, the hardness of unique solution instances of NP-complete problems. Over the last four years, he has been working on algorithms for matching markets. He is one of the founders of algorithmic game theory. In 2001 he published Approximation Algorithms, which is widely regarded as the definitive book on the topic. In 2007, he published the co-edited book Algorithmic Game Theory. Another co-edited book, Online and Matching-Based Market Design, will be published by Cambridge University Press in early 2022; see its flyer:
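The RANKING algorithm named in the abstract is simple to state: fix a uniformly random priority order over the offline vertices in advance, then match each arriving online vertex to its unmatched neighbor of highest priority. A minimal Python sketch (the function name and input format are my own, not from the talk):

```python
import random

def ranking_obm(offline, online_arrivals, seed=None):
    """RANKING for online bipartite matching (Karp-Vazirani-Vazirani 1990):
    draw a uniformly random permutation of the offline vertices once, then
    greedily match each arriving online vertex to its free neighbor of
    highest priority (lowest rank). Achieves a 1 - 1/e competitive ratio
    in expectation against the offline optimum."""
    rng = random.Random(seed)
    perm = rng.sample(list(offline), len(offline))   # random priority order
    rank = {v: r for r, v in enumerate(perm)}
    matched = {}   # offline vertex -> online vertex
    matching = []
    for u, neighbors in online_arrivals:   # online vertices arrive one by one
        free = [v for v in neighbors if v not in matched]
        if free:
            v = min(free, key=rank.get)    # highest-priority free neighbor
            matched[v] = u
            matching.append((u, v))
    return matching
```

For example, with offline vertices {"a", "b"} and two online arrivals each adjacent to both, RANKING produces a perfect matching for every random seed; the (1 – 1/e) bound only bites on adversarial instances.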

How communication networks promote cross-cultural similarities: The case of category formation
Douglas Guilbeault

Abstract: Individuals vary widely in how they categorize novel phenomena. This individual variation has led canonical theories in cognitive and social science to suggest that communication in large social networks leads populations toward divergent category systems. Yet, anthropological data indicates that large, independent societies consistently arrive at similar categories across a range of topics. How is it possible for diverse populations, consisting of individuals with significant variation in how they view the world, to independently construct similar categories? Through a series of online experiments, I show how large communication networks within cultures can promote the formation of similar categories across cultures. I use the online “Grouping Game” to observe how people construct categories in both small and large populations when presented with the same novel images. I replicate this design for English-speaking subjects in the U.S. and Mandarin-speaking subjects in China. In both cultures, solitary individuals and small social groups produced highly divergent category systems. Yet, large social groups separately and consistently arrived at highly similar categories both within and across cultures. These findings are accurately predicted by a simple mathematical model of critical mass dynamics. Altogether, I show how large communication networks can filter lexical diversity among individuals to produce replicable society-level patterns, yielding unexpected implications for cultural evolution.
Bio: Douglas Guilbeault is an assistant professor in the management of organizations at the Berkeley Haas School of Business. His research focuses on how communication networks underlie the production and diffusion of cultural content, such as linguistic categories and social norms. His studies have been published in a number of top journals, including Nature Communications, the Proceedings of the National Academy of Sciences, and Management Science. He has also received top research awards from the International Conference on Computational Social Science, the Cognitive Science Society, and Facebook. Douglas teaches People Analytics at Haas, and he has served as a People Analytics consultant for a variety of organizations.

Gaming Helps! Learning from Strategic Interactions in Natural Dynamics
Katrina Ligett

Abstract: Those who are being classified are sometimes aware of the classifiers that are being used on them, and may have incentive to change their behavior to try to improve the label they receive. The attitude towards such strategic behavior, both in practice and in theoretical treatment, has generally been quite negative, and this is one of the reasons that the internal workings of high-stakes classifiers are often shrouded in secrecy. However, intuitively, agents who strategically change their behavior in response to incentives set by classifiers may actually be doing everyone a favor: they are helping teach the classifier whether the variables that the classifier depends on are truly meaningful for the task at hand---that is, features that, when changed, affect the true label (as opposed to non-meaningful features that have no effect). Over time, this could push the system to develop better classifiers, and could push individuals to invest more in meaningful variables. We study this issue in an online regression setting. Joint work with Yahav Bechavod, Zhiwei Steven Wu, and Juba Ziani. Work appeared at AISTATS'21.
Bio: Katrina Ligett is an Associate Professor of Computer Science at the Hebrew University of Jerusalem, where she is also the Head of the Program on Internet & Society. Her research interests include data privacy, algorithmic fairness, machine learning theory, and algorithmic game theory. She received her PhD in computer science in 2009, from Carnegie Mellon University, and did a postdoc at Cornell University. Before joining the Hebrew University, Katrina was on the faculty in computer science and economics at Caltech.

Two F-words in Peer Review (Fraud and Feedback)
Nihar Shah

Abstract: In this talk, we present two major challenges in peer review, propose solutions with guarantees, and discuss important open problems. (1) Fraud: There have been several recent discoveries of fraud in peer review: A group of participants form a coalition, get assigned each other's papers by manipulating the system, and then accept each other's papers. We present an algorithm which mitigates such fraud by randomizing reviewer assignments, and does not rely on assumptions about the malicious behavior. The algorithm yields an optimal-quality assignment subject to the randomization constraints, and we will discuss experiments characterizing this tradeoff. (2) Feedback: Real-world systems rely on feedback about their performance for their continual improvement. A useful means of obtaining feedback about the peer-review process is to ask authors' opinions. However, author opinions are significantly biased by whether their paper was accepted. We formulate this problem and present algorithms to debias such feedback. Our work relies on the key observation that the direction of this bias is known: the program chairs know which authors' papers were accepted. An overview of research on peer review is available here.
Bio: Nihar B. Shah is an Assistant Professor in the Machine Learning and Computer Science departments at Carnegie Mellon University (CMU). His research interests span statistics, machine learning, information theory, and game theory, recently focusing on improving the peer-review process by designing computational methods, mathematical guarantees, experimental evaluations and deployments. He is a recipient of a Google Research Scholar Award 2021, an NSF CAREER Award 2020-25, the 2017 David J. Sakrison memorial prize from EECS Berkeley for a "truly outstanding and innovative PhD thesis", the Microsoft Research PhD Fellowship 2014-16, the Berkeley Fellowship 2011-13, the IEEE Data Storage Best Paper and Best Student Paper Awards for the years 2011/2012, and the SVC Aiya Medal 2010; he also supervised the Best Student Paper at AAMAS 2019.
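The randomization idea behind the fraud-mitigation result can be illustrated with a toy sketch. This is not the algorithm from the talk (which computes an optimal-quality assignment subject to a cap on each reviewer-paper assignment probability); it only shows why randomization frustrates collusion: if each paper's reviewer is drawn at random from several plausible candidates, no coalition can guarantee being assigned each other's papers. All names and the input format below are hypothetical.

```python
import random

def randomized_assignment(similarity, k=3, load=2, seed=None):
    """Toy randomized reviewer assignment: each paper is given to a
    reviewer drawn uniformly from its top-k most similar reviewers who
    still have capacity, so no reviewer-paper pair is assigned with
    probability above roughly 1/k. A deliberately simplified stand-in
    for optimal probability-capped assignment.

    similarity: {paper: {reviewer: score}}; load: max papers per reviewer.
    """
    rng = random.Random(seed)
    reviewers = {r for scores in similarity.values() for r in scores}
    remaining = {r: load for r in reviewers}
    assignment = {}
    for paper, scores in similarity.items():
        candidates = sorted((r for r in scores if remaining[r] > 0),
                            key=scores.get, reverse=True)[:k]
        if candidates:
            r = rng.choice(candidates)   # random pick defeats manipulation
            remaining[r] -= 1
            assignment[paper] = r
    return assignment
```

The tradeoff the abstract mentions is visible even here: larger k means more randomness (less manipulable) but lower expected similarity of the chosen reviewer.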