
The RAIN seminar is held on Wednesdays from 12 to 1 pm PT this year. You can subscribe to the seminar mailing list here.

RAIN schedule for Spring 2022

Date | Speaker | Topic
Apr. 27 | Federico Echenique | TBD
Apr. 20 | Ravi Jagadeesan | Matching and Prices
Apr. 6 | Constantinos Daskalakis | What does it take to be a good fisherman?
Mar. 30 | Christina Yu | Simple yet Efficient Graph Agnostic Estimators for Network Causal Inference - from Linear to Low Degree Polynomial Models

RAIN schedule for Winter 2022

Date | Speaker | Topic
Mar. 9 | Ron Berman | Naive analytics equilibrium
Feb. 23 | Omer Tamuz | Private Private Information

RAIN schedule for Fall 2021

Date | Speaker | Topic | Comment
Nov. 30 | Vijay Vazirani | Online Bipartite Matching and Adwords | Tuesday 3 pm, joint with CS Theory
Nov. 3 | Douglas Guilbeault | How communication networks promote cross-cultural similarities: The case of category formation |
Oct. 20 | Katrina Ligett | Gaming Helps! Learning from Strategic Interactions in Natural Dynamics |
Oct. 6 | Nihar Shah | Two F-words in Peer Review (Fraud and Feedback) |

Google Calendar for RAIN


Previous year's talks

Archived talks can be accessed here.

Talk Abstracts

Matching and Prices
Ravi Jagadeesan

Abstract: Indivisibilities and budget constraints are pervasive features of many matching markets. But when taken together, these features typically cause failures of gross substitutability—a standard condition on preferences imposed in most matching models. To accommodate budget constraints and other income effects, we analyze matching markets under a weaker condition: net substitutability. Although competitive equilibria do not generally exist in our setting, we show that stable outcomes always exist and are efficient. However, standard auctions and matching procedures, such as the Deferred Acceptance algorithm and the Cumulative Offer process, do not generally yield stable outcomes. We illustrate how the flexibility of prices is critical for our results. We also discuss how budget constraints and other income effects affect classic properties of stable outcomes. Joint work with Alex Teytelboym.
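For context, the Deferred Acceptance procedure mentioned above is the classic Gale-Shapley algorithm. Below is a minimal Python sketch of its standard one-to-one form (our illustration; the talk's setting with prices and budget constraints is richer, and its point is precisely that such procedures can fail to produce stable outcomes there):

```python
# Minimal Gale-Shapley Deferred Acceptance sketch (one-to-one matching).
def deferred_acceptance(prop_prefs, recv_prefs):
    """prop_prefs/recv_prefs: dict mapping each side to a preference list, best first."""
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in recv_prefs.items()}
    next_choice = {p: 0 for p in prop_prefs}   # next preference index each proposer tries
    match = {}                                 # receiver -> tentatively held proposer
    free = list(prop_prefs)
    while free:
        p = free.pop()
        r = prop_prefs[p][next_choice[p]]      # best receiver not yet proposed to
        next_choice[p] += 1
        if r not in match:
            match[r] = p                       # tentatively accept
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])              # displace the weaker proposal
            match[r] = p
        else:
            free.append(p)                     # rejected; propose again later

    return match

props = {"p1": ["r1", "r2"], "p2": ["r1", "r2"]}
recvs = {"r1": ["p2", "p1"], "r2": ["p1", "p2"]}
print(deferred_acceptance(props, recvs))       # {'r1': 'p2', 'r2': 'p1'}
```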
Bio: Ravi Jagadeesan is a Postdoctoral Fellow at Stanford. Starting in July, he will be an Assistant Professor in the Department of Economics at Stanford. He completed his PhD in Business Economics at Harvard in Spring 2020. Before that, he graduated from Harvard with an A.B. in Mathematics and an A.M. in Statistics in Spring 2018.




What does it take to be a good fisherman?
Costis Daskalakis

Abstract: A reasonable approach to figure this out is to collect training data comprising features of fishermen and their daily catch, and then learn a model mapping fishermen features to the size of their catch. Reasonable as this approach may sound, it will most likely result in a biased model. The reason for this bias is that the training data will miss all those individuals who were not good enough at fishing and decided to become hunters (or do something else) instead. Such self-selection bias is pervasive. From understanding what it takes to be a good college student or company employee to learning from expert demonstrations and understanding strategic behavior in markets, data available for learning statistical models are the results of strategic decisions that have already operated on and filtered out some of the relevant data. I will discuss recent progress on some classical econometric challenges revolving around the estimation of linear models under self-selection bias and the identification of non-parametric auction models, and present several open directions for future investigation. This talk is based on joint works with Yeshwanth Cherapanamjeri, Andrew Ilyas, Manolis Zampetakis.
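The self-selection mechanism the abstract describes is easy to see in simulation. The following toy sketch (our illustration, not the paper's model) fits ordinary least squares to a linear model once on the full population and once on the self-selected sample, where individuals with low outcomes have left the data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0
x = rng.normal(size=n)
y = beta * x + rng.normal(size=n)    # true linear model
kept = y > 0.5                       # self-selection: low catches leave the data

def ols_slope(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

print("OLS on full population:", ols_slope(x, y))              # close to 2.0
print("OLS on selected sample:", ols_slope(x[kept], y[kept]))  # noticeably below 2.0
```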
Bio: Constantinos (aka "Costis" with an accent on 'i') Daskalakis is a Professor of Electrical Engineering and Computer Science at MIT. He holds a Diploma in Electrical and Computer Engineering from the National Technical University of Athens, and a PhD in Electrical Engineering and Computer Science from UC Berkeley. He works on Computation Theory and its interface with Game Theory, Economics, Probability Theory, Machine Learning and Statistics. He has resolved long-standing open problems about the computational complexity of Nash equilibrium, and the mathematical structure and computational complexity of multi-item auctions. His current work focuses on high-dimensional statistics and learning from biased, dependent, or strategic data. He has been honored with the ACM Doctoral Dissertation Award, the Kalai Prize from the Game Theory Society, the Sloan Fellowship in Computer Science, the SIAM Outstanding Paper Prize, the Microsoft Research Faculty Fellowship, the Simons Investigator Award, the Rolf Nevanlinna Prize from the International Mathematical Union, the ACM Grace Murray Hopper Award, and the Bodossaki Foundation Distinguished Young Scientists Award.




Naive analytics equilibrium
Ron Berman

Abstract: We study interactions with uncertainty about demand sensitivity. In our solution concept (1) firms choose seemingly-optimal strategies given the level of sophistication of their data analytics, and (2) the levels of sophistication form best responses to one another. Under the ensuing equilibrium, firms underestimate price elasticities and overestimate advertising effectiveness, as observed empirically. The misestimates cause firms to set prices too high and to over-advertise. In games with strategic complements (substitutes), profits Pareto dominate (are dominated by) those of the Nash equilibrium. Applying the model to team production games explains the prevalence of overconfidence among entrepreneurs and salespeople.
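One stylized mechanism consistent with these misestimates (our illustration, not the paper's model) is omitted-variable bias from naive analytics: when price and advertising move together and the analyst's demand model omits advertising, the estimated price sensitivity is biased toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
ad = rng.normal(size=n)
price = 0.8 * ad + rng.normal(size=n)                  # price and advertising move together
sales = -1.5 * price + 1.0 * ad + rng.normal(size=n)   # true sensitivities

def ols_slope(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

# The naive analyst regresses sales on price alone, omitting advertising:
print("true price sensitivity: -1.5")
print("naive estimate:", ols_slope(price, sales))      # about -1.0: biased toward zero
```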
Bio: Ron Berman is an assistant professor of marketing at Wharton. He focuses his research on digital marketing and marketing analytics. Recently Ron has been investigating how firms assess and optimize marketing effectiveness through experiments, how curation algorithms may create filter-bubbles on social media, and how descriptive analytics affects online firm performance. His research has been published in top marketing journals such as Marketing Science and the Journal of Marketing Research and he is a member of the editorial boards of the Journal of Marketing Research and Quantitative Marketing and Economics. Ron disseminates his research by teaching Digital Marketing courses in undergrad, MBA and Executive Education programs, and is often invited by market-leading firms including Google, Facebook, and Wayfair to share and discuss his research. Ron’s experience includes early-stage venture capital investing at Viola Ventures (formerly Carmel Ventures) and developing software for the Israeli Defense Forces (IDF). Ron is an active advisor and investor, involved with startups such as Desti (travel planning, acquired by Nokia), Zimperium (cyber security), Abakus (advertising attribution, acquired by SAP), Peerspace (P2P venue marketplace), Netlify (serverless website deployment), Stackbit (content management), cauzal.ai (conversion optimization) and Honeycomb Insurance (commercial real-estate insurance). Ron holds a PhD and MSc in Business Administration (Marketing) from the University of California, Berkeley, an MBA and MSc in Computer Science from Tel Aviv University, and a BSc in Computer Science, Physics and Mathematics from the Hebrew University of Jerusalem.




Simple yet Efficient Graph Agnostic Estimators for Network Causal Inference - from Linear to Low Degree Polynomial Models
Christina Yu

Abstract: Randomized experiments are widely used to estimate causal effects of proposed "treatments" in domains spanning the physical sciences, social sciences, medicine, and technology industries. However, classical approaches to experimental design rely on critical independence assumptions that are violated when the outcome of an individual a may be affected by the treatment of another individual b, referred to as network interference. Under such network interference, naively using popular estimators and randomized experimental designs can result in significant bias and loss of efficiency. We consider heterogeneous linear and polynomial potential outcomes models for network interference, under which we propose simple estimators for the total treatment effect that output unbiased estimates with low variance under simple randomized designs. Our solution and statistical guarantees do not rely on restrictive network properties, allowing for highly connected graph structures. When the network is completely unknown, we provide a simple unbiased and efficient estimator under a staggered rollout randomized design, showing that the flexibility from experimentation implemented over time can remove any requirement of network knowledge. We believe our results are poised to impact current randomized experimentation strategies due to their simplicity and ease of implementation, their wide applicability across different network structures, and their statistical guarantees under a flexible hierarchy of network interference models.
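To make the staggered-rollout result concrete, here is a toy sketch under the simplest case the abstract covers, a linear potential outcomes model (the setup and names below are our simplification, not the talk's exact construction): because each unit's mean outcome is linear in the treatment probability, average outcomes measured at two rollout stages identify the total treatment effect by extrapolation, with no knowledge of the graph.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 5_000, 0.3
# Unknown "network": each unit is influenced by 5 random units.
neighbors = [rng.choice(n, size=5, replace=False) for _ in range(n)]
base = rng.normal(size=n)                                # baseline outcomes c_i
effects = [rng.uniform(0, 1, size=5) for _ in range(n)]  # edge effects e_ij

def outcomes(z):
    # Linear potential outcomes: Y_i = c_i + sum over i's neighbors j of e_ij * z_j.
    return np.array([base[i] + effects[i] @ z[neighbors[i]] for i in range(n)])

true_tte = outcomes(np.ones(n)).mean() - outcomes(np.zeros(n)).mean()

y0 = outcomes(np.zeros(n))              # stage one: no one treated
z = rng.binomial(1, p, size=n)          # stage two: Bernoulli(p) rollout
yp = outcomes(z)
tte_hat = (yp.mean() - y0.mean()) / p   # linear extrapolation from p up to 1

print("true TTE:", true_tte, " estimate:", tte_hat)
```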
Bio: Christina Lee Yu is an Assistant Professor at Cornell University in the School of Operations Research and Information Engineering. Prior to Cornell, she was a postdoc at Microsoft Research New England. She received her PhD in 2017 and MS in 2013 in Electrical Engineering and Computer Science from Massachusetts Institute of Technology in the Laboratory for Information and Decision Systems. She received her BS in Computer Science from California Institute of Technology in 2011. She received honorable mention for the 2018 INFORMS Dantzig Dissertation Award. She is a recipient of the 2021 Intel Rising Stars Award and a JPMorgan Faculty Research Award. Her research interests include algorithm design and analysis, high dimensional statistics, inference over networks, sequential decision making under uncertainty, online learning, and network causal inference.




Private Private Information
Omer Tamuz

Abstract: In a private private information structure, agents’ signals contain no information about the signals of their peers. We study how informative such structures can be, and characterize those that are on the Pareto frontier, in the sense that it is impossible to give more information to any agent without violating privacy. In our main application, we show how to optimally disclose information about an unknown state under the constraint of not revealing anything about a correlated variable that contains sensitive information.
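A minimal example of such a structure, under Gaussian assumptions (our illustration): give agent 1 the signal S1 = X + eps and agent 2 the signal S2 = X - eps, with eps independent of the state X and Var(eps) = Var(X). Each signal is informative about X, yet Cov(S1, S2) = Var(X) - Var(eps) = 0, and joint Gaussianity then makes the two signals fully independent. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
x = rng.normal(size=n)                 # unknown state
eps = rng.normal(size=n)               # independent noise with Var(eps) = Var(x)
s1, s2 = x + eps, x - eps              # the two agents' signals

print("corr(s1, x): ", np.corrcoef(s1, x)[0, 1])   # ~0.707: informative
print("corr(s2, x): ", np.corrcoef(s2, x)[0, 1])   # ~0.707: informative
print("corr(s1, s2):", np.corrcoef(s1, s2)[0, 1])  # ~0: signals reveal nothing about peers
```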
Bio: Omer Tamuz is a professor of economics and mathematics at Caltech.




Online Bipartite Matching and Adwords
Vijay V. Vazirani

Abstract: Over the last three decades, the online bipartite matching (OBM) problem has emerged as a central problem in the area of Online Algorithms. Perhaps even more important is its role in the area of Matching-Based Market Design. The resurgence of this area, with the revolutions of the Internet and mobile computing, has opened up novel, path-breaking applications, and OBM has emerged as its paradigmatic algorithmic problem. In a 1990 joint paper with Richard Karp and Umesh Vazirani, we gave an optimal algorithm, called RANKING, for OBM, achieving a competitive ratio of (1 – 1/e); however, its analysis was difficult to comprehend. Over the years, several researchers simplified the analysis. We will start by presenting a “textbook quality” proof of RANKING. Its simplicity raises the possibility of extending RANKING all the way to a generalization of OBM called the adwords problem. This problem is both notoriously difficult and very significant, the latter because of its role in the AdWords marketplace of Google. We will show how far this endeavor has gone and what remains. We will also provide a broad overview of the area of Matching-Based Market Design and pinpoint the role of OBM. Based on: https://arxiv.org/pdf/2107.10777.pdf
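RANKING itself is short enough to state in a few lines. A sketch of the standard description (our rendering): every offline vertex draws a random rank once, up front, and each arriving online vertex is matched to its best-ranked unmatched neighbor.

```python
import random

def ranking(offline, online_neighbors, seed=None):
    """RANKING for online bipartite matching (sketch of the standard algorithm).

    offline: list of offline vertices, known before any arrivals.
    online_neighbors: list, in arrival order, of each online vertex's offline neighbors.
    """
    rng = random.Random(seed)
    rank = {v: rng.random() for v in offline}  # one random rank per vertex, fixed up front
    matched = {}                               # offline vertex -> online arrival index
    for t, nbrs in enumerate(online_neighbors):
        free = [v for v in nbrs if v not in matched]
        if free:
            matched[min(free, key=rank.get)] = t   # best rank = smallest draw, by convention
    return matched

# Example: 3 offline vertices, 3 online arrivals.
print(ranking(["a", "b", "c"], [["a", "b"], ["a"], ["b", "c"]], seed=0))
```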
Bio: Vijay Vazirani got his undergraduate degree from MIT in 1979 and his PhD from the University of California, Berkeley in 1983. He is currently a Distinguished Professor at the University of California, Irvine. Vazirani has made fundamental contributions to several areas of the theory of algorithms, including algorithmic matching theory, approximation algorithms and algorithmic game theory, as well as to complexity theory, in which he established, with Les Valiant, the hardness of unique solution instances of NP-complete problems. Over the last four years, he has been working on algorithms for matching markets. He is one of the founders of algorithmic game theory. In 2001 he published Approximation Algorithms, which is widely regarded as the definitive book on the topic. In 2007, he published the co-edited book Algorithmic Game Theory. Another co-edited book, Online and Matching-Based Market Design, will be published by Cambridge University Press in early 2022; see its flyer: https://www.ics.uci.edu/~vazirani/flyer.pdf




How communication networks promote cross-cultural similarities: The case of category formation
Douglas Guilbeault

Abstract: Individuals vary widely in how they categorize novel phenomena. This individual variation has led canonical theories in cognitive and social science to suggest that communication in large social networks leads populations toward divergent category systems. Yet, anthropological data indicates that large, independent societies consistently arrive at similar categories across a range of topics. How is it possible for diverse populations, consisting of individuals with significant variation in how they view the world, to independently construct similar categories? Through a series of online experiments, I show how large communication networks within cultures can promote the formation of similar categories across cultures. I use the online “Grouping Game” to observe how people construct categories in both small and large populations when presented with the same novel images. I replicate this design for English-speaking subjects in the U.S. and Mandarin-speaking subjects in China. In both cultures, solitary individuals and small social groups produced highly divergent category systems. Yet, large social groups separately and consistently arrived at highly similar categories both within and across cultures. These findings are accurately predicted by a simple mathematical model of critical mass dynamics. Altogether, I show how large communication networks can filter lexical diversity among individuals to produce replicable society-level patterns, yielding unexpected implications for cultural evolution.
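The critical-mass dynamics mentioned above are often instantiated, in related work on convention formation, as a naming game. The toy sketch below (ours, not necessarily the talk's exact model) shows the within-group mechanism: repeated pairwise interactions drive a population to a single shared label. Cross-cultural similarity additionally requires shared structure in the stimuli being categorized, which this toy omits.

```python
import random

def naming_game(n_agents, n_rounds, seed=0):
    """Toy naming game: a stand-in for the critical-mass dynamics above."""
    rng = random.Random(seed)
    memory = [set() for _ in range(n_agents)]
    fresh = 0                                     # counter used to invent new names
    for _ in range(n_rounds):
        s, h = rng.sample(range(n_agents), 2)     # random speaker and hearer
        if not memory[s]:
            fresh += 1
            memory[s].add(fresh)                  # speaker invents a name
        name = rng.choice(sorted(memory[s]))
        if name in memory[h]:
            memory[s], memory[h] = {name}, {name}  # success: both commit to the name
        else:
            memory[h].add(name)                    # failure: hearer learns the name
    return {x for m in memory for x in m}          # distinct names still in circulation

print(len(naming_game(50, 20_000)))  # typically collapses to a single shared name
```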
Bio: Douglas Guilbeault is an assistant professor in the management of organizations at the Berkeley Haas School of Business. His research focuses on how communication networks underlie the production and diffusion of cultural content, such as linguistic categories and social norms. His studies have been published in a number of top journals, including Nature Communications, the Proceedings of the National Academy of Sciences, and Management Science. He has also received top research awards from the International Conference on Computational Social Science, the Cognitive Science Society, and Facebook. Douglas teaches People Analytics at Haas, and he has served as a People Analytics consultant for a variety of organizations.




Gaming Helps! Learning from Strategic Interactions in Natural Dynamics
Katrina Ligett

Abstract: Those who are being classified are sometimes aware of the classifiers that are being used on them, and may have incentive to change their behavior to try to improve the label they receive. The attitude towards such strategic behavior, both in practice and in theoretical treatment, has generally been quite negative, and this is one of the reasons that the internal workings of high-stakes classifiers are often shrouded in secrecy. However, intuitively, agents who strategically change their behavior in response to incentives set by classifiers may actually be doing everyone a favor: they are helping teach the classifier whether the variables that the classifier depends on are truly meaningful for the task at hand; that is, features that, when changed, affect the true label (as opposed to non-meaningful features that have no effect). Over time, this could push the system to develop better classifiers, and could push individuals to invest more in meaningful variables. We study this issue in an online regression setting. Joint work with Yahav Bechavod, Zhiwei Steven Wu, and Juba Ziani. Work appeared at AISTATS'21.
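A toy version of this intuition (our construction, not the paper's online-regression model): suppose the meaningful and non-meaningful features are perfectly collinear in the natural population, so a passive learner cannot tell them apart. Heterogeneous gaming perturbs the features independently, breaking the collinearity and letting least squares recover which feature actually drives the outcome.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
w_true = np.array([1.0, 0.0])      # only feature 1 is truly meaningful

# Natural population: the two features are perfectly collinear, so a passive
# learner cannot tell which of them drives the true label.
base = rng.normal(size=n)
features = np.column_stack([base, base])

theta = np.array([0.5, 0.5])       # published rule that agents respond to
effort = rng.uniform(0, 1, size=(n, 2)) * theta   # heterogeneous per-feature gaming
x = features + effort              # strategically modified features
y = x @ w_true + 0.1 * rng.normal(size=n)         # true label responds only to feature 1

# Least squares on the gamed data now separates the two features:
w_hat = np.linalg.lstsq(x, y, rcond=None)[0]
print("recovered weights:", w_hat.round(3))       # close to [1, 0]
```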
Bio: Katrina Ligett is an Associate Professor of Computer Science at the Hebrew University of Jerusalem, where she is also the Head of the Program on Internet & Society. Her research interests include data privacy, algorithmic fairness, machine learning theory, and algorithmic game theory. She received her PhD in computer science in 2009, from Carnegie Mellon University, and did a postdoc at Cornell University. Before joining the Hebrew University, Katrina was on the faculty in computer science and economics at Caltech.




Two F-words in Peer Review (Fraud and Feedback)
Nihar Shah

Abstract: In this talk, we present two major challenges in peer review, propose solutions with guarantees, and discuss important open problems. (1) Fraud: There have been several recent discoveries of fraud in peer review: A group of participants form a coalition, get assigned each other's papers by manipulating the system, and then accept each other's papers. We present an algorithm which mitigates such fraud by randomizing reviewer assignments, and does not rely on assumptions about the malicious behavior. The algorithm yields an optimal-quality assignment subject to the randomization constraints, and we will discuss experiments characterizing this tradeoff. (2) Feedback: Real-world systems rely on feedback about their performance for their continual improvement. A useful means of obtaining feedback about the peer-review process is to ask authors' opinions. However, author opinions are significantly biased by whether their paper was accepted. We formulate this problem and present algorithms to debias such feedback. Our work relies on the key observation that the direction of this bias is known: the program chairs know which authors' papers were accepted. An overview of research on peer review is available here: http://bit.ly/PeerReviewOverview
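For the fraud part, the randomization can be sketched as a small linear program (our rendering of the general recipe the abstract describes; the variable names and numbers are illustrative): maximize total reviewer-paper similarity over marginal assignment probabilities, with every individual probability capped at q so that no coalition can guarantee being assigned its own papers. A dependent-rounding step (omitted here) then samples an assignment realizing these marginals.

```python
import numpy as np
from scipy.optimize import linprog

R, P = 4, 3            # reviewers, papers
revs_per_paper = 2
max_load = 2           # papers per reviewer
q = 0.6                # probability cap that limits manipulation

rng = np.random.default_rng(5)
sim = rng.uniform(size=(R, P))     # reviewer-paper similarity scores

# Equality constraints: each paper gets exactly revs_per_paper reviewers in expectation.
A_eq = np.zeros((P, R * P))
for p in range(P):
    A_eq[p, p::P] = 1.0
# Inequality constraints: each reviewer's expected load is at most max_load.
A_ub = np.zeros((R, R * P))
for r in range(R):
    A_ub[r, r * P:(r + 1) * P] = 1.0

res = linprog(-sim.ravel(),        # linprog minimizes, so negate to maximize similarity
              A_eq=A_eq, b_eq=[revs_per_paper] * P,
              A_ub=A_ub, b_ub=[max_load] * R,
              bounds=[(0, q)] * (R * P))
print(res.x.reshape(R, P).round(2))   # capped marginal assignment probabilities
```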
Bio: Nihar B. Shah is an Assistant Professor in the Machine Learning and Computer Science departments at Carnegie Mellon University (CMU). His research interests span statistics, machine learning, information theory, and game theory, recently focusing on improving the peer-review process by designing computational methods, mathematical guarantees, experimental evaluations and deployments. He is a recipient of a Google Research Scholar Award 2021, an NSF CAREER Award 2020-25, the 2017 David J. Sakrison memorial prize from EECS Berkeley for a "truly outstanding and innovative PhD thesis", the Microsoft Research PhD Fellowship 2014-16, the Berkeley Fellowship 2011-13, the IEEE Data Storage Best Paper and Best Student Paper Awards for the years 2011/2012, and the SVC Aiya Medal 2010; he also supervised the Best Student Paper at AAMAS 2019.