Stanford RAIN (Research on Algorithms and Incentives in Networks) Seminar Talk Archive

2022-2023

May 31, 2023
Benjamin Brooks
Robust Mechanisms for the Financing of Public Goods
→ Abstract and Bio We propose a novel proportional cost-sharing mechanism for funding public goods with interdependent values: the agents simultaneously submit bids, which are nonnegative numbers; the expenditure on the public good is an increasing and concave function of the sum of the bids; and each agent is responsible for the fraction of the expenditure proportional to their bid. The proportional cost-sharing mechanism provides a non-trivial guarantee for social welfare, regardless of the structure of the agents' information and the equilibrium that is played, as long as the social value for the public good is sufficiently large. Moreover, this guarantee is shown to be unimprovable in environments where the designer knows a lower bound on the social value. The guarantee converges to the entire efficient surplus when the social value grows large. When there are two agents, our model can be reinterpreted as one of bilateral trade, with proportional cost-sharing corresponding to proportional pricing.
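The mechanics of the mechanism itself fit in a few lines. Below is a minimal sketch; the concave expenditure function g(s) = sqrt(s) is only a placeholder assumption, since the abstract does not fix a particular g.

```python
import math

def proportional_cost_sharing(bids, g=math.sqrt):
    """Spend g(sum of bids) on the public good and bill each agent in proportion
    to their bid. g is assumed increasing and concave; sqrt is just a placeholder."""
    total = sum(bids)
    expenditure = g(total)
    if total == 0:
        return expenditure, [0.0 for _ in bids]
    return expenditure, [expenditure * b / total for b in bids]

# Example: three agents bid 1, 2, and 5.
spend, payments = proportional_cost_sharing([1.0, 2.0, 5.0])
print(spend)     # g(8) ~ 2.83 spent on the public good
print(payments)  # ~[0.35, 0.71, 1.77]: each agent pays a share proportional to their bid
```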

Bio: Benjamin Brooks is an Associate Professor in the Department of Economics at the University of Chicago. He studies various aspects of economic theory, including games of incomplete information, auction theory and mechanism design, and repeated games.
Oct. 12, 2022
Diyi Yang
More Civility and Positivity for Socially Responsible Language Understanding
→ Abstract and Bio Natural language processing (NLP) has had increasing success and produced extensive industrial applications. Despite being sufficient to enable these applications, current NLP systems often ignore the social part of language, e.g., who says it, in what context, for what goals, which severely limits the functionality of these applications and the growth of the field. Our research focuses on the social part of language, towards building more socially responsible language technologies. In this talk, I will take a closer look at social factors in language and share two recent works for promoting more civility and positivity in language use. The first one studies hate speech by introducing a benchmark corpus on implicit hate speech and computational models to detect and explain latent hatred in language. The second examines positive reframing by neutralizing a negative point of view and generating a more positive perspective without contradicting the original meaning.

Bio: Diyi Yang is an assistant professor in the Computer Science Department at Stanford University. Her research interests are computational social science and natural language processing. Her research goal is to understand the social aspects of language and to build socially aware NLP systems to better support human-human and human-computer interaction. Her work has received multiple paper awards or nominations at ACL, ICWSM, EMNLP, SIGCHI, and CSCW. She is a recipient of Forbes 30 under 30 in Science (2020), IEEE “AI 10 to Watch” (2020), the Intel Rising Star Faculty Award (2021), Microsoft Research Faculty Fellowship (2021), and NSF CAREER Award (2022).
Oct. 26, 2022
Jamie Morgenstern
Shifts in Distributions and Preferences in Response to Learning
→ Abstract and Bio Prediction systems face exogenous and endogenous distribution shift -- the world constantly changes, and the predictions the system makes change the environment in which it operates. For example, a music recommender observes exogenous changes in the user distribution as different communities have increased access to high-speed internet. If users under the age of 18 enjoy their recommendations, the proportion of the user base comprised of those under 18 may endogenously increase. Most of the study of endogenous shifts has focused on the single decision-maker setting, where there is one learner that users either choose to use or not. In this talk, I'll describe several settings where user preferences may cause changes in distributions over the life of an ML system, and how these changes will affect the long-term performance of such systems. Joint work with Sarah Dean, Mihaela Curmei, Maryam Fazel and Lillian Ratliff.

Bio: Jamie Morgenstern is an assistant professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She was previously an assistant professor in the School of Computer Science at Georgia Tech. Prior to starting as faculty, she was fortunate to be hosted by Michael Kearns, Aaron Roth, and Rakesh Vohra as a Warren Center fellow at the University of Pennsylvania. She completed her PhD working with Avrim Blum at Carnegie Mellon University. She studies the social impact of machine learning and the impact of social behavior on ML's guarantees. How should machine learning be made robust to the behavior of the people generating training or test data for it? How can we ensure that the models we design do not exacerbate inequalities already present in society?
Nov. 9, 2022
Shahar Dobzinski
On the Hardness of Dominant Strategy Mechanism Design
→ Abstract and Bio We study the communication complexity of dominant strategy implementations of combinatorial auctions. We start with two domains that are generally considered easy: multi-unit auctions with decreasing marginal values and combinatorial auctions with gross substitutes valuations. For both domains we have fast algorithms that find the welfare-maximizing allocation with communication complexity that is poly-logarithmic in the input size. This immediately implies that welfare maximization can be achieved in an ex-post equilibrium with no significant communication cost, by using VCG payments. In contrast, we show that in both domains the communication complexity of any dominant strategy implementation that achieves the optimal welfare is polynomial in the input size. We then study the approximation ratios achievable by dominant strategy mechanisms. For combinatorial auctions with general valuations, we show that no deterministic dominant-strategy mechanism with efficient communication achieves an approximation ratio of m^(1−eps), where m is the number of items. In contrast, there is a randomized dominant strategy mechanism that achieves an O(sqrt m) approximation. This establishes the first gap between computationally efficient deterministic dominant strategy mechanisms and randomized ones. Joint work with Shiri Ron and Jan Vondrak.

Bio: Shahar Dobzinski is a faculty member of Weizmann's applied math and computer science department. His general research area is algorithmic game theory, an area at the intersection of computer science, game theory, and economics. Specifically, he usually studies theoretical computer science aspects of problems in algorithmic mechanism design. He is also interested in other related areas of theoretical computer science, such as approximation algorithms.
Nov. 16, 2022
Nynke Niezink
Networks in 3D: Inference for Three-way Social Networks
→ Abstract and Bio Most statistical methods for social network analysis are developed for social structures that consist of directed or undirected ties between two actors. Yet, in many social contexts, relations inherently involve three actors. For example, different people are known to perceive and cognitively represent the networks they are embedded in differently. Understanding differences in perceptions, and how perceptions drive behavior, requires an explicit model of network perception (sender-receiver-perceiver) data. Gossip networks too require a three-way network perspective. In this talk, I will introduce statistical models and inference for static and dynamic three-way network analysis based on incomplete observations.

Bio: Nynke Niezink is an assistant professor in the Department of Statistics and Data Science at Carnegie Mellon University. Her research focuses on developing statistical methodology and software for the social sciences, with an emphasis on social network analysis. Much of her work is inspired by interdisciplinary collaborations. She received her PhD in Sociology, her MSc's in Applied Mathematics and Social and Behavioral Sciences, and her BSc's in Mathematics and Pedagogical and Educational Sciences from the University of Groningen. Her work has been supported by the NSF, the NIH, and the Russell Sage and Richard King Mellon Foundations. She was granted a Provost Inclusive Teaching Fellowship for her project targeting diversity, equity, and inclusion in Statistics education at CMU.
Dec. 7, 2022
Thodoris Lykouris
Decentralized multi-agent learning in queuing systems
→ Abstract and Bio Learning in multi-agent systems often poses significant challenges due to interference between agents. In particular, unlike classical stochastic systems, the performance of an agent's action is not drawn i.i.d. from some distribution but is directly affected by the (unobserved) actions of the other agents. This is why most collaborative multi-agent learning approaches aim to globally coordinate all agents' actions to evade this interference. In this talk, we focus on bipartite queuing networks, a common model for two-sided platforms, where N agents request service from K servers. Prior decentralized multi-agent learning approaches have the aforementioned 'global coordination' flavor and therefore suffer from significant shortcomings: they are restricted to symmetric systems, have performance that degrades exponentially in the number of servers, require communication through shared randomness and unique identifiers, and are computationally demanding. In contrast, we provide a simple learning algorithm that, when run independently by each agent in a decentralized fashion, avoids the shortcomings of 'global coordination' and leads to efficient performance in general asymmetric bipartite queuing networks while also having additional robustness properties. Along the way, we provide the first UCB-based algorithm for the centralized case of the problem, which resolves an open question by Krishnasamy, Sen, Johari, and Shakkottai (NeurIPS'16 / OR'21). The paper on which this talk is based is joint work with Daniel Freund and Wentao Weng and can be found here: https://arxiv.org/abs/2206.03324. A preliminary version appeared at COLT'22 and Wentao was selected as a finalist in the Applied Probability Society student paper competition for this work.

Bio: Thodoris Lykouris is an assistant professor at the MIT Sloan School of Management, affiliated with the Operations Management group and the Operations Research Center. His research focuses on data-driven sequential decision-making and spans across the areas of machine learning, dynamic optimization, and economics. Before joining MIT, he received his Ph.D. from Cornell University and spent two years at Microsoft Research NYC as a postdoctoral researcher. He is the recipient of a Google Ph.D. Fellowship and a Cornell University Fellowship and was selected as a finalist in the Dantzig Dissertation award as well as the George Nicholson and Applied Probability Society student paper competitions.
Jan. 11, 2023
Tina Eliassi-Rad
The Why, How, and When of Representations for Complex Systems, and their Implications for Machine Learning
→ Abstract and Bio The theme of the 2021 Nobel Prize in Physics was the study of complex systems. At the most basic level, complex systems consist of units and their interactions. In this talk, I will describe each step of a data analysis pipeline suitable for the study of complex systems: from the system dependencies that can manifest themselves in different flavors (temporal, subset, and spatial) to the common mathematical representations (such as graphs, simplicial complexes, and hypergraphs), their underlying assumptions, and the dependencies they encode. I will discuss the mathematical relationships between representations and explain how information can be lost (or imputed) when we convert data from one representation to another. I will use examples to highlight the importance of dependencies and careful choice of representations and algorithms when studying complex systems. The main message of the talk is that there is no perfect way to analyze a complex system, and that modeling decisions made when examining a data set from one system are not necessarily transferable to another system, or even to another data set from the same system. Yet, I see many studies apply certain pipelines for seemingly no other reason than because they are common in a particular field. Instead, I recommend evaluating and studying each new complex system and dataset individually and questioning each assumption and modeling decision. This talk is based on the following paper: Leo Torres, Ann Sizemore Blevins, Danielle S. Bassett, Tina Eliassi-Rad: The Why, How, and When of Representations for Complex Systems. SIAM Review 63(3): 435-485 (2021), https://doi.org/10.1137/20M1355896. Time permitting, I will describe work on measuring the distance between two graphs by relying on the theory of the length spectrum function from algebraic topology and its relationship to the non-backtracking cycles of a graph. This work is joint with Leo Torres and Pablo Suarez-Serrato, and was published in the Journal of Applied Network Science in June 2019 (http://eliassi.org/papers/appliednetsci19_nbd.pdf).
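One concrete instance of the information-loss point: projecting a hypergraph onto a pairwise graph (its clique expansion) destroys the group structure. The toy example below is my own illustration under that standard construction, not an example from the paper.

```python
from itertools import combinations

def clique_expansion(hyperedges):
    """Project a hypergraph onto a pairwise graph: every hyperedge becomes a clique."""
    edges = set()
    for he in hyperedges:
        edges.update(combinations(sorted(he), 2))
    return edges

# Two different group structures...
h1 = [{'a', 'b', 'c'}]                      # one genuine 3-way interaction
h2 = [{'a', 'b'}, {'b', 'c'}, {'a', 'c'}]   # three separate pairwise interactions

# ...map to the same graph, so the projection cannot tell them apart.
print(clique_expansion(h1) == clique_expansion(h2))  # True
```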

Bio: Tina Eliassi-Rad is a professor of computer science at Northeastern University. She is also a core faculty member at Northeastern's Network Science Institute and the Institute for Experiential AI. In addition, she is an external faculty member at the Santa Fe Institute and the Vermont Complex Systems Center. Prior to joining Northeastern, Tina was an Associate Professor of Computer Science at Rutgers University; and before that she was a member of technical staff and principal investigator at Lawrence Livermore National Laboratory. Tina earned her Ph.D. in Computer Sciences (with a minor in Mathematical Statistics) at the University of Wisconsin-Madison. Her research is at the intersection of data mining, machine learning, and network science. She has over 100 peer-reviewed publications (including a few best paper and best paper runner-up awards); and has given over 250 invited talks and 14 tutorials. Tina's work has been applied to personalized search on the World-Wide Web, statistical indices of large-scale scientific simulation data, fraud detection, mobile ad targeting, cyber situational awareness, drug discovery, democracy and online discourse, and ethics in machine learning. Her algorithms have been incorporated into systems used by governments and industry (e.g., IBM System G Graph Analytics), as well as open-source software (e.g., Stanford Network Analysis Project). In 2017, Tina served as the program co-chair for the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (a.k.a. KDD, which is the premier conference on data mining) and as the program co-chair for the International Conference on Network Science (a.k.a. NetSci, which is the premier conference on network science). In 2020, she served as the program co-chair for the International Conference on Computational Social Science (a.k.a. IC2S2, which is the premier conference on computational social science). Tina received an Outstanding Mentor Award from the U.S. Department of Energy's Office of Science in 2010, became an ISI Foundation Fellow in 2019, was named one of the 100 Brilliant Women in AI Ethics in 2021, and received Northeastern University's Excellence in Research and Creative Activity Award in 2022.
Jan. 25, 2023
Sigal Oren
Algorithmic Game Theory Meets Behavioral Economics
→ Abstract and Bio A recent line of work brings together Algorithmic Game Theory and Behavioral Economics. One approach taken in this line of work is to enrich well-studied settings in Algorithmic Game Theory by considering players that exhibit cognitive biases. In this talk, we will discuss two papers demonstrating this approach in two very different settings. The first paper extends the analysis of classic optimal stopping problems by considering agents that have loss aversion relative to a changing reference point and analyzes their behavior. The second paper relies on the literature on lying in behavioral economics and psychology to tackle one of the foundations of Algorithmic Mechanism Design: truthfulness. Based on joint works with Shahar Dobzinski, Bobby Kleinberg and Jon Kleinberg

Bio: Sigal Oren is an associate professor at the Computer Science department of Ben-Gurion University. She is currently on Sabbatical at Stanford as a Koret fellow. Sigal's research area is algorithmic game theory. She is currently interested in combining behavioral economics and algorithmic game theory.
Feb. 8, 2023
Dan Larremore
Quantifying hierarchy and dynamics in U.S. faculty hiring and retention
→ Abstract and Bio Faculty hiring and retention determine the composition of the U.S. academic workforce and directly shape educational outcomes, career trajectories, the development and spread of ideas, and research priorities. But patterns in faculty hiring and retention are dynamic, reflecting societal and academic priorities, generational turnover, and long-term efforts to diversify the professoriate along gender, racial, and socioeconomic lines. In this talk, we'll analyze, at unprecedented scale and resolution, the academic employment and doctoral education of tenure-track faculty at all PhD-granting U.S. universities over the decade spanning 2011-2020. Focusing on the networks formed when departments hire each other's graduates as faculty, we'll explore the mechanisms shaping these networks as well as the processes of the academic ecosystem that are shaped by them.

Bio: Daniel Larremore is an assistant professor in the Department of Computer Science and the BioFrontiers Institute at the University of Colorado Boulder. His research develops statistical and inferential methods for analyzing large-scale network data, and uses those methods to solve applied problems in diverse domains, including public health and academic labor markets. In particular, his work focuses on generative models for networks, the ongoing evolution of the malaria parasite, and the origins of social inequalities in academic hiring and careers. Prior to joining the CU Boulder faculty, he was an Omidyar Fellow at the Santa Fe Institute (2015-2017) and a post-doctoral fellow at the Harvard T.H. Chan School of Public Health (2012-2015). He obtained his PhD in applied mathematics from CU Boulder in 2012, and holds an undergraduate degree from Washington University in St. Louis.
Feb. 15, 2023
Rad Niazadeh
When Matching Meets Batching: Optimal Multi-stage Algorithms and Applications
→ Abstract and Bio In several applications of real-time matching of demand to supply in online marketplaces --- for example matching delivery requests to dispatching centers in Amazon or allocating video-ads to users in YouTube --- the platform allows for some latency (or there is an inherent allowance for latency) in order to batch the demand and improve the efficiency of the resulting matching. Motivated by these scenarios, in this talk I investigate the optimal trade-off between batching and inefficiency in the context of designing robust online allocations. In particular, I consider K-stage variants of the classic vertex weighted bipartite b-matching and AdWords problems in the adversarial setting, where online vertices arrive stage-wise in K batches, in contrast to one-by-one online arrival. Our main result for both problems is an optimal (1-(1-1/K)^K)-competitive (fractional) matching algorithm, improving the classic (1 − 1/e) competitive ratio bound known for the online variants of these problems (Mehta et al., 2007; Aggarwal et al., 2011). Our main technique, at a high level, is developing algorithmic tools to vary the trade-off between "greediness" and "hedging" of the matching algorithm across stages. We rely on a particular family of convex-programming based matchings that distribute the demand in a specifically balanced way among supply in different stages, while carefully modifying the balancedness of the resulting matching across stages. More precisely, we identify a sequence of polynomials with decreasing degrees to be used as strictly concave regularizers of the maximum weight matching linear program to form these convex programs. At each stage, our fractional multi-stage algorithm returns the corresponding regularized optimal solution as the matching of this stage (by solving the convex program). By providing a structural decomposition of the underlying graph using the optimal solutions of these convex programs and recursively connecting the regularizers together, we develop a new multi-stage primal-dual framework to analyze the competitive ratio of this algorithm. We extend our results to integral allocations in the vertex weighted b-matching problem with large budgets, and in the AdWords problem with small bid over budget ratios. I will also briefly mention a recent extension of these results to the multi-stage configuration allocation problem and its applications to video-ads. The talk is based on a series of works with Yiding Feng and Amin Saberi: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3689448 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3613755
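For intuition about the stated guarantee, the expression 1 - (1 - 1/K)^K can simply be tabulated: a single batch (K = 1) recovers the offline optimum, and as K grows the guarantee decreases toward the classic online bound of 1 - 1/e.

```python
import math

for K in [1, 2, 3, 5, 10, 100]:
    print(K, round(1 - (1 - 1 / K) ** K, 4))
# 1 1.0, 2 0.75, 3 0.7037, 5 0.6723, 10 0.6513, 100 0.634
print(round(1 - 1 / math.e, 4))  # 0.6321, the limit as K -> infinity
```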

Bio: Rad Niazadeh is an Assistant Professor of Operations Management at The University of Chicago Booth School of Business. He is also part of the faculty at the Toyota Technological Institute at Chicago (TTIC) through a courtesy appointment. Prior to joining Chicago Booth, he was a visiting researcher at Google Research NYC and a postdoctoral fellow in the Computer Science Department at Stanford University. He finished his PhD in Computer Science (with a minor in Applied Mathematics) at Cornell University. Rad primarily studies the interplay between algorithms, incentives, and learning in real-time operations of online marketplaces and e-commerce platforms. His research aims to build theoretical methodologies and generic frameworks to design faster and economically efficient market algorithms, and also to help with addressing humanitarian needs (such as equity, fairness, and non-discrimination) in operations of governmental agencies and non-profit organizations. https://faculty.chicagobooth.edu/rad-niazadeh
Mar. 15, 2023
Nika Haghtalab
Multi-objective learning: A unifying framework for robustness, fairness, and collaboration
→ Abstract and Bio Social and real-world considerations such as robustness, fairness, social welfare, and multi-agent tradeoffs have given rise to multi-objective learning paradigms. In recent years, these paradigms have been studied by several disconnected communities and under different names, including collaborative learning, distributional robustness, group fairness, and fair federated learning. In this talk, I will highlight the importance of multi-objective learning paradigms in general, introduce technical tools for addressing them from a simple unifying perspective, and discuss how these problems relate to classical and modern considerations in data-driven processes.

Bio: Nika Haghtalab is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. She works broadly on the theoretical aspects of machine learning and algorithmic economics. Prof. Haghtalab's work builds theoretical foundations for ensuring both the performance of learning algorithms in the presence of everyday economic forces and the integrity of social and economic forces that are born out of the use of machine learning systems. Previously, Prof. Haghtalab was an Assistant Professor in the CS department of Cornell University in 2019-2020. She received her Ph.D. from the Computer Science Department of Carnegie Mellon University. She is a co-founder of the Learning Theory Alliance (LeT-All). Among her honors are the CMU School of Computer Science Dissertation Award, the SIGecom Dissertation Honorable Mention, and a NeurIPS Outstanding Paper Award.
Apr. 5, 2023
Christopher Harshaw
A New Design-Based Framework for Randomized Experiments and the Riesz Estimator
→ Abstract and Bio Randomized experiments are widely used for investigating causal quantities in a variety of settings, from clinical trials and public policy evaluations to development economics and product development. A standard assumption in designing and analyzing such experiments is that of 'no-interference', which states that a participant’s outcome is only affected by their individual treatment and not by the treatments given to other participants in the experiment. Although standard, this assumption does not hold in many experimental settings, e.g. studies of the effect of vaccines where disease may spread between the participants or cash transfer programs where participants may affect local economies containing other participants. In this talk, I will present a new design-based experimental framework for formulating and investigating causal quantities under complex interference. The framework is expressive in the sense that it allows experimenters to define and investigate new causal estimands based on continuous or discrete treatment assignments under rich and complex interference structures. We present the Riesz Estimator, which is a unified approach to estimation based on insights from functional analysis. Finally, we derive necessary statistical tools (CLT, variance estimators) to construct asymptotically valid confidence intervals for the causal estimand. Joint work with Fredrik Sävje and Yitan Wang.

Bio: Christopher Harshaw is a FODSI postdoctoral fellow hosted jointly between UC Berkeley and MIT. He obtained his PhD in Computer Science from Yale University, where he was advised by Daniel Spielman and Amin Karbasi. His research addresses theoretical aspects of causal inference and data science, particularly algorithmic and statistical problems arising in the design and analysis of randomized experiments.
Apr. 12, 2023
Dhruv Madeka
Deep Reinforcement Learning for Real-World Inventory Management
→ Abstract and Bio We present a Deep Reinforcement Learning approach to solving a periodic review inventory control system with stochastic vendor lead times, lost sales, correlated demand, and price matching. While this dynamic program has historically been considered intractable, we show that several policy learning approaches are competitive with or outperform classical baseline approaches. In order to train these algorithms, we develop novel techniques to convert historical data into a simulator and present a collection of results that motivate this approach. We also present a model-based reinforcement learning procedure (Direct Backprop) to solve the dynamic periodic review inventory control problem by constructing a differentiable simulator. Under a variety of metrics Direct Backprop outperforms model-free RL and newsvendor baselines, in both simulations and real-world deployments.

Bio: Dhruv is a Principal Machine Learning Scientist at Amazon. His current research focuses on applying Deep Reinforcement Learning to supply chain problems. Dhruv has also worked on developing generative and supervised deep learning models for probabilistic time series forecasting. In the past, Dhruv worked on the Quantitative Research team at Bloomberg LP, contributing to the open-source Python and Jupyter community.
Apr. 26, 2023
Kamesh Munagala
Limits of an Information Intermediary in Auction Design
→ Abstract and Bio We study the classical Bayesian auction setting with a twist: Between the revenue maximizing seller and the buyers lies an intermediary that is better informed about the buyer values. The intermediary now segments the market by selectively releasing information to the seller, who still controls the auction. This process is called signaling and allows the seller to price discriminate. Though one would expect signaling to always help the seller, in the setting with one buyer, Bergemann, Brooks, and Morris [AER 2015] showed a remarkable result: Signaling can maximally help the buyer without really helping the seller. Specifically, there exists a signaling scheme where the seller's revenue does not increase but the item always sells, thereby maximizing the consumer (buyer) surplus. In this talk, we will explore whether such a result is possible in more general settings: First, when the type space of the buyer is “inter-dimensional” with a private budget or deadline in addition to a private value, and second, when there are multiple buyers in the auction. On the positive side, we show exact and approximation results via new signaling schemes, while on the negative side, we show impossibility results that capture the limits to which information intermediaries can help the buyers. Joint work with Reza Alijani, Sid Banerjee, Shao-Heng Ko, and Kangning Wang, and combines two papers that appeared in ACM EC 2022.

Bio: Kamesh Munagala is Professor of Computer Science at Duke University. He is broadly interested in algorithm design, particularly approximation and online algorithms, algorithmic fairness, and algorithmic game theory. He obtained his Ph.D. (2003) and M.S. (2002) from Stanford University, and B.Tech. (1998) from IIT Bombay. He is an ACM Distinguished Scientist (2019); a recipient of the NSF CAREER Award (2008) and the Alfred P. Sloan Fellowship (2009); and is a co-author on the best papers at the WINE 2018 and WWW 2009 conferences. He was a Visiting Research Professor at Twitter in 2012, served as the Director of Graduate Studies for Duke CS from 2012 to 2015, and currently serves as its Associate Chair.
May 3, 2023
Chara Podimata
The disparate effects of recommending to strategic users
→ Abstract and Bio Recommendation systems are pervasive in the digital economy. An important assumption in many deployed systems is that user consumption reflects user preferences in a static sense: users consume the content they like with no other considerations in mind. However, as we document in a large-scale online survey, user behavior departs from this model in a crucial way: users choose content strategically, with the goal of influencing their future recommendations. We model user behavior as a two-stage noisy signaling Stackelberg game between the recommendation system and users: in a finite-length first stage, the recommendation system implements a preliminary recommendation policy, to which users respond by strategically consuming content to change the breakdown of content types recommended to them in the future. Based on the user's preferences as learned in this first stage, in the second stage, the platform commits to a recommendation policy by which it will recommend content to the users. We show that users' strategic behavior can affect the user experience: at equilibrium in this game, differences in users' preferences become accentuated as they strategically consume content further outside the mainstream. This effect is particularly strong among users from (statistical) minorities, who must specifically avoid signaling interest in mainstream content in order to ensure the algorithm will show them content related to their minoritized identities. This puts minority users at an unfair disadvantage, where they cannot access mainstream content without the algorithm suppressing the content types that only they (as the minority) enjoy. We next propose three interventions that account for strategic consumption and improve recommendation quality (both on average and for minority groups): (1) Adopting a recommendation system policy that uses preferences from a prior, (2) Communicating to users that universally liked ("mainstream") content will not be used as the basis of recommendation, and (3) Serving content that is personalized enough yet expected to be liked in the beginning. Finally, we describe a methodology to inform applied theory modeling in incentive-aware learning settings with survey results. Based on joint work with Andreas Haupt (MIT) and Dylan Hadfield-Menell (MIT).

Bio: Chara Podimata is currently a FODSI postdoctoral fellow at UC Berkeley and is joining MIT Sloan as an Assistant Professor of OR/Stat in Fall 2023. She received her PhD from Harvard, advised by Yiling Chen. She is interested in social aspects of computing and more specifically, the effects of humans adapting to machine learning algorithms used for consequential decision-making. During her PhD, she interned at MSR and Google, and her research was supported by a Microsoft Dissertation Grant and a Siebel Scholarship. Outside of research, she spends her time adventuring with her pup, Terra.
May 10, 2023
Dave Holtz
Reducing Interference Bias in Online Marketplace Experiments using Cluster Randomization: Evidence from a Pricing Meta-Experiment on Airbnb
→ Abstract and Bio Online marketplace designers frequently run randomized experiments to measure the impact of proposed product changes. However, given that marketplaces are inherently connected, total average treatment effect (TATE) estimates obtained through individual-level randomized experiments may be biased due to violations of the stable unit treatment value assumption, a phenomenon we refer to as 'interference bias.' Cluster randomization, i.e., the practice of randomizing treatment assignment at the level of 'clusters' of similar individuals, is an established experiment design technique for countering interference bias in social networks, but it is unclear ex ante if it will be effective in marketplace settings. In this paper, we use a meta-experiment or 'experiment over experiments' conducted on Airbnb to both provide empirical evidence of interference bias in online market settings and assess the viability of cluster randomization as a tool for reducing interference bias in marketplace TATE estimates. Results from our meta-experiment indicate that at least 19.76% of the TATE estimate produced by an individual-randomized evaluation of the platform fee increase we study is attributable to interference bias and eliminated through the use of cluster randomization. We also find suggestive, non-statistically significant evidence that interference bias in seller-side experiments is more severe in demand-constrained markets, and that the efficacy of cluster randomization at reducing interference bias increases with cluster quality.
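A minimal sketch of the design contrast, for readers unfamiliar with cluster randomization. This is a generic illustration under assumed toy market clusters; the clustering of Airbnb listings used in the meta-experiment is considerably more involved.

```python
import random

def individual_randomize(units, p=0.5, seed=0):
    """Standard unit-level randomization: each unit is treated independently."""
    rng = random.Random(seed)
    return {u: rng.random() < p for u in units}

def cluster_randomize(unit_to_cluster, p=0.5, seed=0):
    """Cluster-level randomization: units that plausibly interfere with each other
    (same cluster) share a treatment arm, which is what reduces interference bias."""
    rng = random.Random(seed)
    cluster_arm = {c: rng.random() < p for c in set(unit_to_cluster.values())}
    return {u: cluster_arm[c] for u, c in unit_to_cluster.items()}

# Toy market: listings grouped into competing 'markets' (clusters of three).
listings = {f"listing_{i}": f"market_{i // 3}" for i in range(9)}
print(individual_randomize(listings))  # arms mixed within each market
print(cluster_randomize(listings))     # all listings in a market share the same arm
```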

Bio: David Holtz is an assistant professor in the Management of Organizations (MORS) and Entrepreneurship and Innovation groups at the Haas School of Business. He earned his PhD at the MIT Sloan School of Management, in the Information Technology (IT) group. He also holds an MA in Physics and Astronomy from Johns Hopkins University, and a BA in Physics from Princeton University. Holtz studies the design of online marketplaces and platforms using large-scale online field experiments and novel digital trace data. His research agenda focuses on online trust and reputation system design, the business and societal impacts of personalized recommendations, and the design and analysis of field experiments in online marketplaces. His work has appeared in a number of journals and conferences, including the Proceedings of the National Academy of Sciences and the ACM Conference on Economics and Computation, and has been covered by popular news outlets, such as MSNBC, The Washington Post, and the Boston Globe. Before returning to academia, Holtz spent time in the Bay Area working as a data scientist and product manager at a number of technology firms, including Airbnb (where he was one of the founding members of the company's algorithmic pricing team) and TrialPay (acquired by Visa). In carrying out his research agenda, he continues to work closely with many leading firms in the tech sector, including Airbnb, Facebook, Spotify, Microsoft, and Etsy.
May 17, 2023
Yuri Faenza
Stable matchings in choice function models: algorithms, polyhedra, and an application to school choice
→ Abstract and Bio In the classical marriage model by Gale and Shapley, agents from one side of the market have a strict ordering of the agents from the other side of the market and vice versa. The goal is to find a matching that satisfies a fairness condition known as stability. However, strict orders cannot model many preference patterns that arise in problems such as diversification of school cohorts, formation of teams, etc. Hence, much attention has recently been devoted to matching problems where preferences of agents have a more complex behavior, which can be described via certain choice functions. In the first part of this talk, I will investigate algorithmic properties of these models, showing that the classical combinatorial approach based on the distributive lattice of stable matchings and the description of the convex hull of stable matchings as an LP are intimately related. This approach may turn out to be of interest for other problems as well. In the second part of the talk, I will show how certain choice functions can be used to model school admission criteria that take into account well-defined diversity and fairness concerns. I will show the practical relevance of those choice functions by applying them to data from specialized high school admissions in New York City. Based on joint work with Swati Gupta (GA Tech & MIT) and Xuan Zhang (Meta Research).
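For reference, a minimal sketch of the classical strict-preference baseline that the talk generalizes: proposer-proposing Deferred Acceptance in the one-to-one marriage model. The choice-function models discussed in the talk replace the simple "hold the single best proposal so far" step with a general choice function.

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley Deferred Acceptance with strict, complete preference lists.
    Proposers propose in preference order; each receiver holds its best offer so far."""
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)            # proposers without a held offer
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                             # receiver -> proposer currently held
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        held = match.get(r)
        if held is None:
            match[r] = p                   # receiver had no offer: hold this one
        elif rank[r][p] < rank[r][held]:
            match[r] = p                   # better offer: swap, reject the old one
            free.append(held)
        else:
            free.append(p)                 # rejected: p will propose to its next choice
    return match

students = {'s1': ['A', 'B'], 's2': ['A', 'B']}
schools = {'A': ['s2', 's1'], 'B': ['s1', 's2']}
print(deferred_acceptance(students, schools))  # {'A': 's2', 'B': 's1'}
```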

Bio: Yuri Faenza obtained his Ph.D. from the Sapienza University of Rome and is currently an associate professor in the IEOR department at Columbia University. He works in discrete optimization, operations research, matching theory, market design, and their applications. His research has been funded by the NSF (including an NSF CAREER award), the ONR, the Swiss NSF, and by a Meta Research Award. He is the chair of the Mixed-Integer Programming Society of the MOS.
June 7, 2023
Kuang Xu
Non-Stationary Bandit Learning via Predictive Sampling
→ Abstract and Bio Thompson sampling has proven effective across a wide range of stationary bandit environments. However, as we demonstrate in this paper, it can perform poorly when applied to non-stationary environments. We show that such failures are attributable to the fact that, when exploring, the algorithm does not differentiate actions based on how quickly the information acquired loses its usefulness due to non-stationarity. Building upon this insight, we propose predictive sampling, an algorithm that deprioritizes acquiring information that quickly loses usefulness. A theoretical guarantee on the performance of predictive sampling is established through a Bayesian regret bound. We provide versions of predictive sampling for which computations tractably scale to complex bandit environments of practical interest. Through numerical simulations, we demonstrate that predictive sampling outperforms Thompson sampling in all non-stationary environments examined. Joint work with Yueyang Liu and Benjamin Van Roy (MS&E, Stanford University).
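For context, here is standard Beta-Bernoulli Thompson sampling run against a non-stationary toy environment. This is the baseline being critiqued, not the predictive sampling algorithm from the talk: because the posterior weights all past observations equally, the agent is slow to notice that the best arm has switched.

```python
import random

env_rng = random.Random(1)

def drifting_bernoulli(t, arm):
    """Toy non-stationary environment: the better arm switches halfway through."""
    p = [0.8, 0.2] if t < 500 else [0.2, 0.8]
    return 1 if env_rng.random() < p[arm] else 0

def thompson_sampling(reward_fn, n_arms, horizon, seed=0):
    """Standard Beta-Bernoulli Thompson sampling (not the talk's predictive sampling).
    Evidence never 'expires', which is the failure mode the abstract describes."""
    rng = random.Random(seed)
    alpha = [1] * n_arms   # Beta(1, 1) priors
    beta = [1] * n_arms
    total = 0
    for t in range(horizon):
        samples = [rng.betavariate(alpha[a], beta[a]) for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        r = reward_fn(t, arm)
        total += r
        alpha[arm] += r
        beta[arm] += 1 - r
    return total

print(thompson_sampling(drifting_bernoulli, n_arms=2, horizon=1000))
```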

Bio: Kuang Xu is an Associate Professor at the Stanford Graduate School of Business. His research focuses on principles for decision-making in a stochastic system, with applications to operations, experimentation and logistics. He has received a First Place in the INFORMS George E. Nicholson Student Paper Competition, a Best Paper Award as well as Outstanding Student Paper Award at ACM SIGMETRICS, and an ACM SIGMETRICS Rising Star Research Award. He currently serves as an Associate Editor for Operations Research and Management Science. Outside of academia, he has consulted as the chief data science advisor for Shipt and as a senior advisor for Uber.

2021-2022

Oct. 6, 2021
Nihar Shah
Two F-words in Peer Review (Fraud and Feedback)
→ Abstract and Bio In this talk, we present two major challenges in peer review, propose solutions with guarantees, and discuss important open problems. (1) Fraud: There have been several recent discoveries of fraud in peer review: A group of participants form a coalition, get assigned each other's papers by manipulating the system, and then accept each other's papers. We present an algorithm which mitigates such fraud by randomizing reviewer assignments, and does not rely on assumptions about the malicious behavior. The algorithm yields an optimal-quality assignment subject to the randomization constraints, and we will discuss experiments characterizing this tradeoff. (2) Feedback: Real-world systems rely on feedback about their performance for their continual improvement. A useful means of obtaining feedback about the peer-review process is to ask authors' opinions. However, author opinions are significantly biased by whether their paper was accepted. We formulate this problem and present algorithms to debias such feedback. Our work relies on the key observation that the direction of this bias is known: the program chairs know which authors' papers were accepted. An overview of research on peer review is available here: http://bit.ly/PeerReviewOverview
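A stylized sketch of the randomized-assignment idea from the fraud part, in my own simplification and assuming scipy is available: maximize total reviewer-paper similarity subject to load constraints and a cap on each individual assignment probability, which is what limits a coalition's ability to guarantee being assigned each other's papers. The actual algorithm additionally samples an integral assignment from the fractional solution.

```python
import numpy as np
from scipy.optimize import linprog

def randomized_assignment(similarity, papers_per_reviewer, reviewers_per_paper, cap):
    """LP over assignment probabilities x[r, p], each capped at `cap`."""
    n_rev, n_pap = similarity.shape
    c = -similarity.flatten()                     # linprog minimizes, so negate
    A_ub, b_ub, A_eq, b_eq = [], [], [], []
    for r in range(n_rev):                        # each reviewer reviews at most L papers
        row = np.zeros(n_rev * n_pap)
        row[r * n_pap:(r + 1) * n_pap] = 1
        A_ub.append(row); b_ub.append(papers_per_reviewer)
    for p in range(n_pap):                        # each paper gets k reviewers in expectation
        row = np.zeros(n_rev * n_pap)
        row[p::n_pap] = 1
        A_eq.append(row); b_eq.append(reviewers_per_paper)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, cap)] * (n_rev * n_pap))
    return res.x.reshape(n_rev, n_pap)

sim = np.array([[0.9, 0.1], [0.8, 0.7], [0.2, 0.9]])  # 3 reviewers, 2 papers
print(randomized_assignment(sim, papers_per_reviewer=1, reviewers_per_paper=1, cap=0.6).round(2))
```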

Bio: Nihar B. Shah is an Assistant Professor in the Machine Learning and Computer Science departments at Carnegie Mellon University (CMU). His research interests span statistics, machine learning, information theory, and game theory, recently focusing on improving the peer-review process by designing computational methods, mathematical guarantees, experimental evaluations and deployments. He is a recipient of a Google Research Scholar Award 2021, an NSF CAREER Award 2020-25, the 2017 David J. Sakrison memorial prize from EECS Berkeley for a "truly outstanding and innovative PhD thesis", the Microsoft Research PhD Fellowship 2014-16, the Berkeley Fellowship 2011-13, the IEEE Data Storage Best Paper and Best Student Paper Awards for the years 2011/2012, and the SVC Aiya Medal 2010, and has supervised the Best Student Paper at AAMAS 2019.
Oct. 20, 2021
Katrina Ligett
Gaming Helps! Learning from Strategic Interactions in Natural Dynamics
→ Abstract and Bio Those who are being classified are sometimes aware of the classifiers that are being used on them, and may have incentive to change their behavior to try to improve the label they receive. The attitude towards such strategic behavior, both in practice and in theoretical treatment, has generally been quite negative, and this is one of the reasons that the internal workings of high-stakes classifiers are often shrouded in secrecy. However, intuitively, agents who strategically change their behavior in response to incentives set by classifiers may actually be doing everyone a favor: they are helping teach the classifier whether the variables that the classifier depends on are truly meaningful for the task at hand---that is, features that, when changed, affect the true label (as opposed to non-meaningful features that have no effect). Over time, this could push the system to develop better classifiers, and could push individuals to invest more in meaningful variables. We study this issue in an online regression setting. Joint work with Yahav Bechavod, Zhiwei Steven Wu, and Juba Ziani. Work appeared at AISTATS'21.

Bio: Katrina Ligett is an Associate Professor of Computer Science at the Hebrew University of Jerusalem, where she is also the Head of the Program on Internet & Society. Her research interests include data privacy, algorithmic fairness, machine learning theory, and algorithmic game theory. She received her PhD in computer science in 2009, from Carnegie Mellon University, and did a postdoc at Cornell University. Before joining the Hebrew University, Katrina was on the faculty in computer science and economics at Caltech.
Nov. 3, 2021
Douglas Guilbeault
How communication networks promote cross-cultural similarities: The case of category formation
→ Abstract and Bio Individuals vary widely in how they categorize novel phenomena. This individual variation has led canonical theories in cognitive and social science to suggest that communication in large social networks leads populations toward divergent category systems. Yet, anthropological data indicates that large, independent societies consistently arrive at similar categories across a range of topics. How is it possible for diverse populations, consisting of individuals with significant variation in how they view the world, to independently construct similar categories? Through a series of online experiments, I show how large communication networks within cultures can promote the formation of similar categories across cultures. I use the online “Grouping Game” to observe how people construct categories in both small and large populations when presented with the same novel images. I replicate this design for English-speaking subjects in the U.S. and Mandarin-speaking subjects in China. In both cultures, solitary individuals and small social groups produced highly divergent category systems. Yet, large social groups separately and consistently arrived at highly similar categories both within and across cultures. These findings are accurately predicted by a simple mathematical model of critical mass dynamics. Altogether, I show how large communication networks can filter lexical diversity among individuals to produce replicable society-level patterns, yielding unexpected implications for cultural evolution.

Bio: Douglas Guilbeault is an assistant professor in the management of organizations at the Berkeley Haas School of Business. His research focuses on how communication networks underlie the production and diffusion of cultural content, such as linguistic categories and social norms. His studies have been published in a number of top journals, including Nature Communications, the Proceedings of the National Academy of Sciences, and Management Science. He has also received top research awards from the International Conference on Computational Social Science, the Cognitive Science Society, and Facebook. Douglas teaches People Analytics at Haas, and he has served as a People Analytics consultant for a variety of organizations.
Nov. 30, 2021
Vijay Vazirani (Tues 3 pm, joint with CS Theory)
Online Bipartite Matching and Adwords
→ Abstract and Bio Over the last three decades, the online bipartite matching (OBM) problem has emerged as a central problem in the area of Online Algorithms. Perhaps even more important is its role in the area of Matching-Based Market Design. The resurgence of this area, with the revolutions of the Internet and mobile computing, has opened up novel, path-breaking applications, and OBM has emerged as its paradigmatic algorithmic problem. In a 1990 joint paper with Richard Karp and Umesh Vazirani, we gave an optimal algorithm, called RANKING, for OBM, achieving a competitive ratio of (1 – 1/e); however, its analysis was difficult to comprehend. Over the years, several researchers simplified the analysis. We will start by presenting a "textbook quality" proof of RANKING. Its simplicity raises the possibility of extending RANKING all the way to a generalization of OBM called the adwords problem. This problem is both notoriously difficult and very significant, the latter because of its role in the AdWords marketplace of Google. We will show how far this endeavor has gone and what remains. We will also provide a broad overview of the area of Matching-Based Market Design and pinpoint the role of OBM. Based on: https://arxiv.org/pdf/2107.10777.pdf
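A minimal sketch of RANKING for unweighted OBM: fix one uniformly random permutation of the offline vertices, then match each arriving online vertex to its highest-ranked unmatched neighbor. (The toy instance below is my own illustration.)

```python
import random

def ranking(offline, online_neighbors, seed=0):
    """KVV RANKING. online_neighbors lists each online vertex's offline neighbors,
    in arrival order. Returns a dict: offline vertex -> online vertex it serves."""
    rng = random.Random(seed)
    perm = list(offline)
    rng.shuffle(perm)                              # one random ranking, fixed up front
    rank = {u: i for i, u in enumerate(perm)}
    matched = {}
    for v, nbrs in enumerate(online_neighbors):
        free = [u for u in nbrs if u not in matched]
        if free:
            matched[min(free, key=rank.get)] = v   # highest-ranked free neighbor
    return matched

offline = ['u1', 'u2', 'u3']
arrivals = [{'u1', 'u2'}, {'u1'}, {'u2', 'u3'}]    # neighbor sets of online vertices
print(ranking(offline, arrivals))
```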

Bio: Vijay Vazirani got his undergraduate degree from MIT in 1979 and his PhD from the University of California, Berkeley in 1983. He is currently a Distinguished Professor at the University of California, Irvine. Vazirani has made fundamental contributions to several areas of the theory of algorithms, including algorithmic matching theory, approximation algorithms and algorithmic game theory, as well as to complexity theory, in which he established, with Les Valiant, the hardness of unique solution instances of NP-complete problems. Over the last four years, he has been working on algorithms for matching markets. He is one of the founders of algorithmic game theory. In 2001 he published Approximation Algorithms, which is widely regarded as the definitive book on the topic. In 2007, he published the co-edited book Algorithmic Game Theory. Another co-edited book, Online and Matching-Based Market Design, will be published by Cambridge University Press in early 2022; see its flyer: https://www.ics.uci.edu/~vazirani/flyer.pdf
Feb. 23, 2022
Omer Tamuz
Private Private Information
→ Abstract and Bio In a private private information structure, agents’ signals contain no information about the signals of their peers. We study how informative such structures can be, and characterize those that are on the Pareto frontier, in the sense that it is impossible to give more information to any agent without violating privacy. In our main application, we show how to optimally disclose information about an unknown state under the constraint of not revealing anything about a correlated variable that contains sensitive information.

Bio: Omer Tamuz is a professor of economics and mathematics at Caltech.
Mar. 9, 2022
Ron Berman
Naive analytics equilibrium
→ Abstract and Bio We study interactions with uncertainty about demand sensitivity. In our solution concept (1) firms choose seemingly-optimal strategies given the level of sophistication of their data analytics, and (2) the levels of sophistication form best responses to one another. Under the ensuing equilibrium firms underestimate price elasticities and overestimate advertising effectiveness, as observed empirically. The misestimates cause firms to set prices too high and to over-advertise. In games with strategic complements (substitutes), profits Pareto dominate (are dominated by) those of the Nash equilibrium. Applying the model to team production games explains the prevalence of overconfidence among entrepreneurs and salespeople.

Bio: Ron Berman is an assistant professor of marketing at Wharton. He focuses his research on digital marketing and marketing analytics. Recently Ron has been investigating how firms assess and optimize marketing effectiveness through experiments, how curation algorithms may create filter-bubbles on social media, and how descriptive analytics affects online firm performance. His research has been published in top marketing journals such as Marketing Science and the Journal of Marketing Research and he is a member of the editorial boards of the Journal of Marketing Research and Quantitative Marketing and Economics. Ron disseminates his research by teaching Digital Marketing courses in undergrad, MBA and Executive Education programs, and is often invited by market leading firms including Google, Facebook, and Wayfair to share and discuss his research. Ron’s experience includes early-stage venture capital investing at Viola Ventures (formerly Carmel Ventures) and developing software for the Israeli Defense Forces (IDF). Ron is an active advisor and investor, involved with startups such as Desti (travel planning, acquired by Nokia), Zimperium (cyber security), Abakus (advertising attribution, acquired by SAP), Peerspace (P2P venue marketplace), Netlify (serverless website deployment), Stackbit (content management), cauzal.ai (conversion optimization) and Honeycomb Insurance (commercial real-estate insurance). Ron holds a PhD and MSc in Business Administration (Marketing) from the University of California, Berkeley, an MBA and MSc in Computer Science from Tel-Aviv University, and a BSc in Computer Science, Physics and Mathematics from the Hebrew University in Jerusalem.
Mar. 30, 2022
Christina Yu
Simple yet Efficient Graph Agnostic Estimators for Network Causal Inference - from Linear to Low Degree Polynomial Models
→ Abstract and Bio Randomized experiments are widely used to estimate causal effects of proposed "treatments" in domains spanning the physical sciences, social sciences, medicine, and technology industries. However, classical approaches to experimental design rely on critical independence assumptions that are violated when the outcome of an individual a may be affected by the treatment of another individual b, referred to as network interference. Under such network interference, naively using popular estimators and randomized experimental designs can result in significant bias and loss of efficiency. We consider heterogeneous linear and polynomial potential outcomes models for network interference, under which we propose simple estimators for the total treatment effect that output unbiased estimates with low variance under simple randomized designs. Our solution and statistical guarantees do not rely on restrictive network properties, allowing for highly connected graph structures. When the network is completely unknown, we provide a simple unbiased and efficient estimator under a staggered rollout randomized design, showing that the flexibility from experimentation implemented over time can remove any requirement of network knowledge. We believe our results are poised to impact current randomized experimentation strategies due to their simplicity and ease of implementation, wide applicability across different network structures, and statistical guarantees under a flexible hierarchy of network interference models.
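A minimal sketch of the staggered rollout design mentioned above (illustration only; the estimator the talk builds on top of it is not reproduced here): treatment is switched on for a growing random fraction of units across a few time periods, and mean outcomes recorded at each stage can then be extrapolated, e.g. by fitting a low-degree polynomial in the treated fraction, to estimate the total treatment effect.

```python
import random

def staggered_rollout(units, fractions, seed=0):
    """Gradually turn on treatment: at stage t, a random `fractions[t]` share of units
    is treated. Once treated, a unit stays treated in all later stages."""
    rng = random.Random(seed)
    order = list(units)
    rng.shuffle(order)
    n = len(order)
    return [set(order[:int(round(p * n))]) for p in fractions]

units = list(range(10))
fractions = [0.0, 0.25, 0.5, 1.0]
for p, treated in zip(fractions, staggered_rollout(units, fractions)):
    print(p, sorted(treated))
```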

Bio: Christina Lee Yu is an Assistant Professor at Cornell University in the School of Operations Research and Information Engineering. Prior to Cornell, she was a postdoc at Microsoft Research New England. She received her PhD in 2017 and MS in 2013 in Electrical Engineering and Computer Science from Massachusetts Institute of Technology in the Laboratory for Information and Decision Systems. She received her BS in Computer Science from California Institute of Technology in 2011. She received honorable mention for the 2018 INFORMS Dantzig Dissertation Award. She is a recipient of the 2021 Intel Rising Stars Award and a JPMorgan Faculty Research Award. Her research interests include algorithm design and analysis, high dimensional statistics, inference over networks, sequential decision making under uncertainty, online learning, and network causal inference.
Apr. 6, 2022
Constantinos Daskalakis
What does it take to be a good fisherman?
→ Abstract and Bio A reasonable approach to figure this out is to collect training data comprising features of fishermen and their daily catch, and then learn a model mapping fishermen features to the size of their catch. Reasonable as this approach may sound, it will most likely result in a biased model. The reason for this bias is that the training data will miss all those individuals who were not good enough at fishing and decided to become hunters (or do something else) instead. Such self-selection bias is pervasive. From understanding what it takes to be a good college student or company employee to learning from expert demonstrations and understanding strategic behavior in markets, data available for learning statistical models are the results of strategic decisions that have already operated on and filtered out some of the relevant data. I will discuss recent progress on some classical econometric challenges revolving around estimating linear models under self-selection bias and identifying non-parametric auction models, and present several open directions for future investigation. This talk is based on joint works with Yeshwanth Cherapanamjeri, Andrew Ilyas, and Manolis Zampetakis.

Bio: Constantinos (aka "Costis" with an accent on "i") Daskalakis is a Professor of Electrical Engineering and Computer Science at MIT. He holds a Diploma in Electrical and Computer Engineering from the National Technical University of Athens, and a PhD in Electrical Engineering and Computer Science from UC Berkeley. He works on Computation Theory and its interface with Game Theory, Economics, Probability Theory, Machine Learning and Statistics. He has resolved long-standing open problems about the computational complexity of Nash equilibrium, and the mathematical structure and computational complexity of multi-item auctions. His current work focuses on high-dimensional statistics and learning from biased, dependent, or strategic data. He has been honored with the ACM Doctoral Dissertation Award, the Kalai Prize from the Game Theory Society, the Sloan Fellowship in Computer Science, the SIAM Outstanding Paper Prize, the Microsoft Research Faculty Fellowship, the Simons Investigator Award, the Rolf Nevanlinna Prize from the International Mathematical Union, the ACM Grace Murray Hopper Award, and the Bodossaki Foundation Distinguished Young Scientists Award.
Apr. 20, 2022
Ravi Jagadeesan
Matching and Prices
→ Abstract and Bio Indivisibilities and budget constraints are pervasive features of many matching markets. But when taken together, these features typically cause failures of gross substitutability—a standard condition on preferences imposed in most matching models. To accommodate budget constraints and other income effects, we analyze matching markets under a weaker condition: net substitutability. Although competitive equilibria do not generally exist in our setting, we show that stable outcomes always exist and are efficient. However, standard auctions and matching procedures, such as the Deferred Acceptance algorithm and the Cumulative Offer process, do not generally yield stable outcomes. We illustrate how the flexibility of prices is critical for our results. We also discuss how budget constraints and other income effects affect classic properties of stable outcomes. Joint work with Alex Teytelboym

Bio: Ravi Jagadeesan is a Postdoctoral Fellow at Stanford. Starting in July, he will be an Assistant Professor in the Department of Economics at Stanford. He completed his PhD in Business Economics at Harvard in Spring 2020. Before that, he graduated from Harvard with an A.B. in Mathematics and an A.M. in Statistics in Spring 2018.
Apr. 27, 2022
Federico Echenique
Empirical Welfare Economics
→ Abstract and Bio We provide an empirical analogue to the equality of marginal rates of substitution condition for Pareto optimality, and address related questions. Welfare economics relies on agents’ utility functions: we revisit classical questions in welfare economics, assuming access to agents’ past choices instead of their utility functions. Our main result considers whether there are convex preferences for which some candidate allocation is Pareto optimal. We show that this candidate allocation is possibly efficient if and only if it is efficient for the incomplete relation derived from the revealed preference relations and convexity. Similar ideas are used to address when the Kaldor criterion may be used to make welfare comparisons, what prices can be Walrasian equilibrium prices, and the possibility of a representative consumer when the income distribution is endogenous. Joint work with Chris Chambers.

Bio: Federico Echenique is the Allen and Lenabelle Davis Professor of Economics at Caltech. Federico Echenique's research focuses on understanding economic models of agents and markets. He is interested in determining the testable implications of models and the relationship between different theoretical models and the data that may be used to test them. He also studies fairness and efficiency in discrete allocation problems, such as two-sided matching markets and one-sided object allocation. Echenique is active in research at the intersection of economics and computer science. Echenique is Licenciado en Economía from the Universidad de la República in Uruguay and holds a PhD in economics from UC Berkeley. Prior to joining the Caltech faculty, he was an assistant professor at the Universidad de la República from 2000 to 2002 and an assistant professor at the Universidad Torcuato Di Tella in Buenos Aires from 2001 to 2002. He has served on the editorial boards of the American Economic Review, Econometrica, The Economic Journal, Economic Theory, and the Journal of Economic Theory, and he is currently a co-editor of Theoretical Economics. Echenique is also a fellow of the Econometric Society.

2020-2021

Sep. 30, 2020
Betsy Ogburn
Social network dependence, the replication crisis, and (in)valid inference
→ Abstract and Bio In the first part of this talk, we show that social network dependence can result in spurious associations due to network dependence, potentially contributing to replication crises across the health and social sciences. Researchers in these fields frequently sample subjects from one or a small number of communities, schools, hospitals, etc., and while many of the limitations of such convenience samples are well-known, the issue of statistical dependence due to social network ties has not previously been addressed. A paradigmatic example of this is the Framingham Heart Study (FHS). Using a statistic that we adapted to measure network dependence, we test for network dependence and for possible spurious associations in several of the thousands of influential papers published using FHS data. Results suggest that some of the many decades of research on coronary heart disease, other health outcomes, and peer influence using FHS data may suffer from spurious estimates of association and anticonservative uncertainty quantification due to unacknowledged network structure. But data with network dependence abound, and in many settings researchers are explicitly interested in learning about social network dynamics. Therefore, there is high demand for methods for causal and statistical inference with social network data. The second part of the talk describes recent work on causal inference for observational data from a single social network, focusing on (1) new types of causal estimands that are of interest in social network settings, and (2) conditions under which central limit theorems hold and inference based on approximate normality is licensed.
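For a concrete feel for testing network dependence, the sketch below computes a generic Moran-type statistic on a network outcome (large positive values indicate that connected subjects have more similar outcomes than independence would predict) and a simple permutation p-value. This is an illustrative stand-in, not the specific statistic or testing procedure used in the underlying papers.

```python
import numpy as np

def network_morans_i(A, x):
    """Moran's I for outcome vector x on a symmetric 0/1 adjacency matrix A."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    W = A.sum()                      # total edge weight (each edge counted twice)
    return (len(x) / W) * (z @ A @ z) / (z @ z)

def permutation_pvalue(A, x, n_perm=2000, seed=0):
    """Right-tailed permutation test: reshuffle outcomes over nodes."""
    rng = np.random.default_rng(seed)
    obs = network_morans_i(A, x)
    perms = [network_morans_i(A, rng.permutation(x)) for _ in range(n_perm)]
    return obs, (1 + sum(p >= obs for p in perms)) / (n_perm + 1)
```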

Bio: Dr. Elizabeth (Betsy) Ogburn is currently an Associate Professor in the Department of Biostatistics at Johns Hopkins University. She is also the founder of the COVID-19 Collaboration Platform. She received her Ph.D. in Biostatistics from Harvard University, and her research interests include causal inference and epidemiological methods.
Oct. 7, 2020
Carrie Wu (SOAL Seminar)
Developing Data Efficient Algorithms for AI
→ Abstract and Bio Our increasingly ambitious goals in artificial intelligence motivate several key algorithmic challenges, such as: how do we design algorithms that make the best use of the data that is available, and how do we design algorithms that are empirically and theoretically effective on the kinds of data that we often see in practice, for example, data with temporal dependencies and data that follow distributions that are hard to describe. In this talk, I will give examples of algorithmic solutions that address some of these challenges. I will first present a theoretical analysis of rates of convergence for SGD with experience replay, which is a technique used in Reinforcement Learning to break temporal correlations in the data. I will then present an algorithm that solves Markov Decision Processes with nearly optimal sample and runtime guarantees. Lastly, I will present an algorithmic solution for estimating local density for an arbitrary dataset.
Oct. 14, 2020
Ruta Mehta
Nash Social Welfare Approximation for Strategic Agents
→ Abstract and Bio A central goal in the long literature on fair division is the design of mechanisms that implement fair outcomes, despite the participants' strategic behavior. We study this question by measuring the fairness of an allocation using the geometric mean of the agents' values, known as the Nash social welfare (NSW). This objective is maximized by widely known concepts such as the Nash bargaining solution, proportional fairness, and the competitive equilibrium with equal incomes; we focus on (approximately) implementing this objective and analyze the Trading Post mechanism. We consider allocating goods that are substitutes or complements and show that this mechanism achieves an approximation of 2 for concave utility functions, and becomes essentially optimal for complements, where it can reach (1+ε) for any ε > 0. Moreover, we show that the Nash equilibria of this mechanism are pure and provide individual fairness in the sense of proportionality. (Joint work with Simina Branzei and Vasilis Gkatzelis. To appear in Operations Research.)
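As a quick illustration of the objective (not of the Trading Post mechanism itself), the snippet below computes the Nash social welfare of a fractional allocation as the geometric mean of agents' utilities, under hypothetical additive valuations chosen only for the example.

```python
import numpy as np

# Hypothetical additive valuations: values[i, j] = agent i's value for good j.
values = np.array([[0.6, 0.1, 0.3],
                   [0.2, 0.7, 0.1]])

def nash_social_welfare(values, allocation):
    """Geometric mean of agents' utilities under a fractional allocation.

    allocation[i, j] = fraction of good j given to agent i; columns sum to 1.
    """
    utilities = (values * allocation).sum(axis=1)
    return np.exp(np.mean(np.log(utilities)))

equal_split = np.full_like(values, 1 / values.shape[0])
print(nash_social_welfare(values, equal_split))
```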

Bio: Ruta Mehta is an assistant professor in CS at UIUC, working on algorithmic, complexity, strategic, fairness, and learning aspects of various game-theoretic and economic problems. Prior to this, she was a postdoctoral fellow at the Simons Institute, UC Berkeley (Aug'15 - Dec'15), and at Georgia Tech (Sept'12 - July'15). She received her Ph.D. from IIT-Bombay. Her Ph.D. thesis, titled 'Nash Equilibrium Computation in Various Games', won the ACM India Doctoral Dissertation Award, 2012. Other awards she has received include the Best Postdoctoral Fellow Award at Georgia Tech (2014) and the NSF CAREER Award (2018).
Oct. 21, 2020
Karl Rohe
Vintage Factor Analysis with Varimax Performs Statistical Inference
→ Abstract and Bio Vintage Factor Analysis is nearly a century old and remains popular today with practitioners. A key step, the factor rotation, is historically controversial because it appears to be unidentifiable. This controversy goes back as far as Charles Spearman. The unidentifiability is still reported in all modern multivariate textbooks. This talk will overturn this controversy and provide a positive theory for PCA with a varimax rotation. Just as sparsity helps to find a solution in p>n regression, we show that sparsity resolves the rotational invariance of factor analysis. PCA + varimax is fast to compute and provides a unified spectral estimation strategy for Stochastic Blockmodels, topic models (LDA), and nonnegative matrix factorization. Moreover, the estimator is consistent for an even broader class of models and the old factor analysis diagnostics (which have been used for nearly a century) assess the identifiability. Paper: https://arxiv.org/abs/2004.05387
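Computationally, the estimator discussed in the talk amounts to PCA followed by a varimax rotation. Below is a minimal numpy sketch of that pipeline using the classic Kaiser iteration for varimax; it conveys the mechanics only and does not reproduce the paper's tuning, diagnostics, or consistency analysis.

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    """Classic Kaiser varimax rotation of a p-by-k matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    obj = 0.0
    for _ in range(n_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (L @ np.diag((L ** 2).sum(axis=0))) / p)
        )
        R = u @ vt
        obj_old, obj = obj, s.sum()
        if obj_old != 0 and obj / obj_old < 1 + tol:
            break
    return loadings @ R

def pca_varimax(X, k):
    """PCA on centered data, then varimax-rotate the top-k components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return varimax(U[:, :k])   # rotated component scores
```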

Bio: Karl Rohe is an Associate Professor of Statistics at the University of Wisconsin-Madison, with courtesy appointments in Journalism, Educational Psychology, and Electrical & Computer Engineering. He is an associate editor at JRSS-B and JASA. He received his PhD from Berkeley in 2011, where he worked with Bin Yu.
Oct. 28, 2020
Christopher Musco (OR Seminar)
Optimal Stochastic Trace Estimation
→ Abstract and Bio I will discuss algorithms for an important computational primitive in linear algebra: approximately computing the trace of an implicit matrix A that can only be accessed through matrix-vector multiplications. Trace approximation finds applications across machine learning and scientific computing, where it is used to compute matrix norms, spectral densities, log-determinants, triangle counts in graphs, and much more. In 1990, Hutchinson introduced an elegant randomized algorithm for the trace approximation problem that has become ubiquitous in practice. I will introduce a simple modified version of this algorithm that provides the same theoretical guarantees, but requires quadratically fewer matrix-vector multiplications, and performs far better in experiments. We pair this result with matching lower bounds based on reductions to communication complexity and hypothesis testing for spiked-covariance matrices. Our lower bounds fall into a broader research agenda of better understanding the computational complexity of basic linear algebra problems in the restricted 'matrix-vector query' model of computation, which generalizes common algorithmic frameworks like linear sketching and Krylov subspace methods. Joint work with Raphael A. Meyer, Cameron Musco, and David P. Woodruff. Paper: https://arxiv.org/abs/2010.09649
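To make the matrix-vector query model concrete, here is a small sketch of the classical Hutchinson estimator together with a variance-reduced variant in the spirit of the algorithm described in the talk (deflate a sketched top subspace, then run Hutchinson on the remainder). The query split and other details are illustrative choices, not the paper's exact prescription.

```python
import numpy as np

def hutchinson(matvec, n, num_queries, rng):
    """tr(A) estimated as the average of g^T A g over random sign vectors g."""
    G = rng.choice([-1.0, 1.0], size=(n, num_queries))
    return np.mean([G[:, j] @ matvec(G[:, j]) for j in range(num_queries)])

def variance_reduced_trace(matvec, n, num_queries, rng):
    """Exact trace on a sketched subspace plus Hutchinson on the deflated remainder."""
    k = max(num_queries // 3, 1)
    S = rng.choice([-1.0, 1.0], size=(n, k))
    Q, _ = np.linalg.qr(np.column_stack([matvec(S[:, j]) for j in range(k)]))
    t_top = sum(Q[:, j] @ matvec(Q[:, j]) for j in range(Q.shape[1]))
    G = rng.choice([-1.0, 1.0], size=(n, k))
    G = G - Q @ (Q.T @ G)   # project query vectors away from the sketched subspace
    t_rest = np.mean([G[:, j] @ matvec(G[:, j]) for j in range(k)])
    return t_top + t_rest

rng = np.random.default_rng(0)
A = np.diag(np.arange(1.0, 101.0))          # trace is 5050
print(variance_reduced_trace(lambda v: A @ v, 100, 30, rng))
```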
Nov. 4, 2020
Katy Craig (OR Seminar)
Gradient Flows in the Wasserstein Metric: From Discrete to Continuum via Regularization
→ Abstract and Bio Over the past ten years, optimal transport has become a fundamental tool in statistics and machine learning: the Wasserstein metric provides a new notion of distance for classifying distributions and a rich geometry for interpolating between them. In parallel, optimal transport has led to new theoretical results on the stability and long time behavior of partial differential equations through the theory of Wasserstein gradient flows. These two lines of research recently intersected in a series of works that characterized the dynamics of training neural networks with a single hidden layer as a Wasserstein gradient flow. In this talk, I will briefly introduce the mathematical theory of Wasserstein gradient flows and describe recent results on discrete to continuum limits. In particular, I will show how passing from the discrete to continuum limit by introducing an appropriate regularization can lead to faster rates of convergence, as well as novel, deterministic particle methods for diffusive processes.

Bio: Katy Craig is an assistant professor in the department of mathematics at the University of California, Santa Barbara. Prior to coming to UCSB, Katy received her Ph.D. from Rutgers University, held an NSF postdoc at UCLA, and held a UC President’s Postdoctoral Fellowship at UCSB.
Nov. 18, 2020
Guillaume Basse (RAIN/OR Joint Seminar 8:30 AM PT)
Displacement Effects in a Hot Spot Policing Intervention in Medellin: Inference and Pitfalls
→ Abstract and Bio In hot spot policing, resources are targeted at specific locations predicted to be at high risk of crime; so-called 'hot spots.' Rather than reduce overall crime, however, there is a concern that these interventions simply displace crime from the targeted locations to nearby non-hot spots. We address this question in the context of a large-scale randomized experiment in Medellin, Colombia, in which police were randomly assigned to increase patrols at a subset of possible hot spots. Estimating the displacement effects on control locations is difficult because the probability that a nearby hot spot is treated is a complex function of the underlying geography. While existing methods developed for this 'general interference' setting, especially Horvitz-Thompson (HT) estimators, have attractive theoretical properties, they can perform poorly in practice and mislead practitioners. In this talk, I explore the key pitfalls that practitioners should watch out for when conducting this type of analysis, and propose some ways to partially remedy them.
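For context on the estimators being critiqued, the snippet below shows a bare-bones Horvitz-Thompson estimate of a mean potential outcome under a user-supplied exposure mapping, with exposure probabilities obtained by simulating the randomization design. The functions draw_assignment and exposure_of are hypothetical placeholders; this is a generic illustration of the 'general interference' machinery rather than the paper's specific analysis.

```python
import numpy as np

def ht_mean(y, exposure, target, pi):
    """Horvitz-Thompson estimate of E[Y(exposure = target)].

    y: observed outcomes; exposure: realized exposure label per unit;
    pi: probability that each unit receives `target` under the design.
    """
    y = np.asarray(y, dtype=float)
    hit = (np.asarray(exposure) == target).astype(float)
    return np.mean(hit * y / pi)

def exposure_probs(draw_assignment, exposure_of, n_units, target, n_sims=5000, seed=0):
    """Estimate pi_i(target) by Monte Carlo over the randomization design."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_units)
    for _ in range(n_sims):
        z = draw_assignment(rng)               # one draw of the treatment vector
        counts += (exposure_of(z) == target)   # hypothetical exposure mapping
    return counts / n_sims
```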

Bio: Guillaume Basse is an Assistant Professor in the MS&E and Statistics departments at Stanford. His research focuses broadly on Causal Inference and Design of Experiments in complex settings. He is particularly interested in complications arising when interventions spill over across space and/or time.
Dec. 2, 2020
José Correa
The Value of Observability in Dynamic Pricing
→ Abstract and Bio We consider a dynamic pricing problem where a firm sells one item to a single buyer in order to maximize expected revenues. The firm commits to a price function over an infinite horizon. The buyer arrives at some random time with a private value for the item. He is more impatient than the seller and strategizes the time of his purchase in order to maximize his expected utility, which implies either buying immediately or waiting to benefit from a lower price. We study how important it is to observe the buyer's arrival time in terms of the seller's expected revenue. When the seller can observe the arrival of the buyer, she can make the price function contingent on his arrival time. On the contrary, when the seller cannot observe the arrival, her price function is fixed at time zero for the whole horizon. The value of observability (VO) is defined as the worst-case ratio between the expected revenue of the seller when she observes the buyer's arrival and that when she does not. Our main result establishes that, in a very general setting regarding valuation and arrival time distributions, the value of observability is bounded by a small constant. To obtain this bound we fully characterize the observable arrival setting and use this solution to construct a random and periodic price function for the unobservable case. This is joint work with Dana Pizarro and Gustavo Vulcano.

Bio: José Correa is a full professor in the Department of Industrial Engineering at Universidad de Chile. José obtained a mathematical engineering degree from Universidad de Chile in 1999 and a PhD in Operations Research from MIT in 2004. His research, focusing on algorithmic game theory and mechanism design, has received numerous awards, including an ACM SIGecom best paper award, an INFORMS Transportation Science and Logistics best paper award, a Tucker Prize finalist selection, and research awards from Amazon and Google. José has given keynote talks at several institutions and conferences and has served on the program committees of international computer science conferences. He also serves, and has served, on the editorial boards of some of the leading journals of his field: Mathematical Programming B, Mathematics of Operations Research (as Game Theory Area Editor), and Operations Research.
Feb. 17, 2021
Jean Pouget-Abadie
Design and Analysis of Bipartite (Market) Experiments
→ Abstract and Bio Bipartite experiments are a recent object of study in causal inference: randomized experiments where treatment is applied to one set of units while outcomes of interest are measured on a different set of units, with the interactions between the treated units and the 'outcome' units captured by a bipartite graph. These experiments are particularly useful in settings where strong interference effects occur between units of a bipartite graph. In market experiments, for example, assigning treatment at the seller level and measuring outcomes at the buyer level (or vice versa) may lead to causal models that better account for the interference that naturally occurs between buyers and sellers. In this talk, we will cover the motivation for and formal setting of bipartite experiments. Furthermore, we will explore possible design choices for such experiments, namely clustered randomized designs, as well as various unbiased inference methods.
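As a toy version of the setting, the sketch below randomizes treatment over seller-side clusters of a bipartite graph and computes, for each buyer, the fraction of its connected sellers that are treated (one simple exposure measure); a contrast between highly and lightly exposed buyers then serves as a crude effect estimate. The function names and the specific exposure definition are illustrative assumptions, not the designs or estimators developed in the talk.

```python
import numpy as np

def cluster_randomize_sellers(seller_cluster, p_treat, rng):
    """Assign whole seller clusters to treatment with probability p_treat."""
    clusters = np.unique(seller_cluster)
    treated_clusters = clusters[rng.random(len(clusters)) < p_treat]
    return np.isin(seller_cluster, treated_clusters).astype(float)

def buyer_exposure(B, seller_treated):
    """B[i, j] = 1 if buyer i interacts with seller j; exposure = treated share."""
    degree = B.sum(axis=1)
    return (B @ seller_treated) / np.maximum(degree, 1)

def naive_contrast(buyer_outcomes, exposure, hi=0.8, lo=0.2):
    """Crude comparison of mean outcomes for high- vs. low-exposure buyers."""
    return buyer_outcomes[exposure >= hi].mean() - buyer_outcomes[exposure <= lo].mean()
```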

Bio: Jean is a research scientist at Google NY, on the Algorithms and Optimization team, led by Vahab Mirrokni. Before Google, Jean was a PhD student in computer science at Harvard University, advised by Edoardo Airoldi and Salil Vadhan. Prior to that, he was an undergraduate at Ecole Polytechnique in Paris. His recent research interests focus on causal inference, particularly when interactions between units are present.
Feb. 24, 2021
Annie Liang
Data and Incentives
→ Abstract and Bio Markets for lending and insurance incentivize good behavior by forecasting risk on the basis of past outcomes. As "big data" expands the set of covariates used to predict risk, how will these incentives change? We show that "attribute" data which is informative about consumer quality tends to decrease effort, while "circumstance" data which predicts idiosyncratic shocks to outcomes tends to increase it. When covariates are independent, this effect is uniform across all consumers. Under more general forms of correlation, this effect continues to hold on average, but observation of a new covariate may lead to disparate impact---increasing effort for some consumer groups and decreasing it for others. A regulator can improve social welfare by restricting the use of either attribute or circumstance data, and by limiting the use of covariates with substantial disparate impact.

Bio: Annie Liang is an Assistant Professor of Economics and an Assistant Professor of Computer Science at Northwestern University. Her work focuses on economic theory and the application of machine learning techniques in the social sciences. She has studied the dynamics of strategic information acquisition, as well as the use of machine learning to evaluate and improve economic models.
Mar. 3, 2021
Arun Chandrasekhar
Identifying the latent space geometry of network models through analysis of curvature
→ Abstract and Bio Statistically modeling networks, across numerous disciplines and contexts, is fundamentally challenging because of (often high-order) dependence between connections. A common approach assigns each person in the graph to a position on a low-dimensional manifold. Distance between individuals in this (latent) space is inversely proportional to the likelihood of forming a connection. The choice of the latent geometry (the manifold class, dimension, and curvature) has consequential impacts on the substantive conclusions of the model. More positive curvature in the manifold, for example, encourages more and tighter communities; negative curvature induces repulsion among nodes. Currently, however, the choice of the latent geometry is an a priori modeling assumption and there is limited guidance about how to make these choices in a data-driven way. In this work, we present a method to consistently estimate the manifold type, dimension, and curvature from an empirically relevant class of latent spaces: simply connected, complete Riemannian manifolds of constant curvature. Our core insight comes from representing the graph as a noisy distance matrix based on the ties between cliques. Leveraging results from statistical geometry, we develop hypothesis tests to determine whether the observed distances could plausibly be embedded isometrically in each of the candidate geometries. We explore the accuracy of our approach with simulations and then apply our approach to datasets from economics and sociology as well as neuroscience. Paper: https://stanford.edu/~arungc/LCM.pdf

Bio: Arun Chandrasekhar is an Associate Professor of Economics at Stanford University. His work focuses on development economics. He studies the role that social networks play in developing countries. In particular, he is interested in how the economics of networks can help us understand information aggregation failures and breakdown of cooperation in the developing world.
Apr. 7, 2021
Michael Leung
Network Cluster-Robust Inference
→ Abstract and Bio Since network data commonly consists of observations on a single large network, researchers often partition the network into clusters in order to apply cluster-robust inference methods. All existing such methods require clusters to be asymptotically independent. We prove that for this requirement to hold, under certain conditions, it is necessary and sufficient for clusters to have low conductance, the ratio of edge boundary size to volume, which yields a measure of cluster quality. We show in simulations that, for important classes of networks lacking low-conductance clusters, cluster-robust methods can exhibit substantial size distortion, whereas for networks with such clusters, they outperform HAC estimators. To assess the existence of low-conductance clusters and construct them, we draw on results in spectral graph theory showing a close connection between conductance and the spectrum of the graph Laplacian. Based on these results, we propose to use the spectrum to compute the number of low-conductance clusters and spectral clustering to compute the clusters. We illustrate our results and proposed methods in simulations and empirical applications.
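The key quantity in the talk, cluster conductance, is easy to compute directly. The sketch below evaluates the conductance of a node subset from an adjacency matrix as the cut size divided by the smaller of the two sides' volumes; it simply illustrates the definition referenced in the abstract, not the paper's spectral procedure for finding such clusters.

```python
import numpy as np

def conductance(A, S):
    """Conductance of node set S in a graph with symmetric adjacency matrix A.

    Edges crossing the boundary of S, divided by the smaller of the two volumes,
    where a node contributes its degree to the volume of its side.
    """
    S = np.asarray(S, dtype=bool)
    cut = A[np.ix_(S, ~S)].sum()
    vol_S = A[S].sum()
    vol_rest = A[~S].sum()
    return cut / min(vol_S, vol_rest)
```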

Bio: Michael Leung is an economist at USC whose work focuses on developing econometric methods for network data.
Apr. 14, 2021
Paul Milgrom
Investment Incentives in Near-Optimal Mechanisms
→ Abstract and Bio In many real-world resource allocation problems, optimization is computationally intractable, so any practical allocation mechanism must be based on an approximation algorithm. We study investment incentives in strategy-proof mechanisms that use such approximations. In sharp contrast with the Vickrey-Clarke-Groves mechanism, for which individual returns on investments are aligned with social welfare, we find that some algorithms that approximate efficient allocation arbitrarily well can nevertheless create misaligned investment incentives that lead to arbitrarily bad overall outcomes. However, if a near-efficient algorithm "excludes bossy negative externalities," then its outcomes remain near-efficient even after accounting for investments. A weakening of this "XBONE" condition is necessary and sufficient for the result.

Bio: Paul Milgrom is the Shirley and Leonard Ely Professor of Humanities and Sciences in the Department of Economics at Stanford University and professor, by courtesy, at both the Department of Management Science and Engineering and the Graduate School of Business. He is the world's leading auction designer, known for his work on auction theory and innovative resource allocation methods, particularly in radio spectrum. He is the co-recipient of the 2020 Nobel Prize in Economic Sciences, together with Robert Wilson, 'for improvements to auction theory and inventions of new auction formats.'
Apr. 21, 2021
Shipra Agrawal
Dynamic Pricing and Learning under the Bass Model
→ Abstract and Bio We consider a novel formulation of the dynamic pricing and demand learning problem, where the evolution of demand in response to posted prices is governed by a stochastic variant of the popular Bass model with parameters (α, β) that are linked to the so-called "innovation" and "imitation" effects. Unlike the more commonly used i.i.d. demand models, in this model the price posted not only affects the demand and the revenue in the current round but also the evolution of demand, and hence the fraction of market potential that can be captured, in future rounds. Finding a revenue-maximizing dynamic pricing policy in this model is non-trivial even when model parameters are known, and requires solving for the optimal non-stationary policy of a continuous-time, continuous-state MDP. In this paper, we consider the problem in which dynamic pricing is used in conjunction with learning the model parameters, with the objective of optimizing the cumulative revenues over a given selling horizon. Our main contribution is an algorithm with a regret guarantee of O(m^{2/3}), where m is mnemonic for the (known) market size. Moreover, we show that no algorithm can incur a smaller order of loss by deriving a matching lower bound. We observe that in this problem the market size m, and not the time horizon T, is the fundamental driver of the complexity; our lower bound in fact indicates that for any fixed α, β, most non-trivial instances of the problem have constant T and large m. This insight sets the problem setting considered here uniquely apart from the MAB-type formulations typically considered in the learning-to-price literature.
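For intuition about the roles of the innovation parameter α and the imitation parameter β, the snippet below simulates the classic, price-free, deterministic Bass diffusion curve; the talk's stochastic, price-dependent variant builds on the same adoption dynamics but is not reproduced here.

```python
import numpy as np

def bass_adoption(alpha, beta, m, horizon, dt=0.01):
    """Euler simulation of the deterministic Bass model:
    dF/dt = (alpha + beta * F) * (1 - F), with m * F cumulative adopters."""
    steps = int(horizon / dt)
    F = np.zeros(steps + 1)
    for t in range(steps):
        F[t + 1] = F[t] + dt * (alpha + beta * F[t]) * (1 - F[t])
    return m * F

curve = bass_adoption(alpha=0.03, beta=0.4, m=10_000, horizon=25)
print(round(curve[-1]))   # market potential nearly exhausted by the end of the horizon
```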

Bio: Shipra Agrawal is the Cyrus Derman Assistant Professor in the Department of Industrial Engineering and Operations Research at Columbia University. She is also affiliated with the Department of Computer Science and the Data Science Institute. She received her Ph.D. from Stanford University in June 2011 under the guidance of Prof. Yinyu Ye and was a researcher at Microsoft Research India from 2011 to 2015. Her research spans several areas of optimization and machine learning, including online optimization under uncertainty, multi-armed bandits, online learning, and reinforcement learning. Shipra serves as an associate editor for Management Science, Mathematics of Operations Research, and the INFORMS Journal on Optimization. Her research is supported by a Google Faculty Research Award, an Amazon Research Award, and an NSF CAREER Award.
Apr. 28, 2021
Lihua Lei
Hierarchical Community Detection for Heterogeneous and Multi-scaled Networks
→ Abstract and Bio Real-world networks are often hierarchical, heterogeneous, and multi-scaled, while the idealized stochastic block models that are extensively studied in the literature tend to be over-simplified. In a line of work, we propose several top-down recursive partitioning algorithms which start with the entire network and divide the nodes into two communities by certain spectral clustering methods repeatedly, until a stopping rule indicates no further community structures. For these algorithms, the number of communities does not need to be known a priori or estimated consistently. On a broad class of hierarchical network models, our algorithms are proved to achieve the exact recovery for sparse networks with expected node degrees logarithmic in the network size, and are computationally more efficient than non-hierarchical spectral clustering algorithms. More interestingly, we identify regimes where no algorithm can recover all communities simultaneously while our algorithm can still recover the mega-communities (unions of communities defined by the hierarchy) consistently without recovering the finest structure. Our theoretical results are based on the newly developed two-to-infinity eigenspace perturbation theory for binary random matrices with independent or dependent entries.
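A minimal caricature of the top-down approach: split nodes by the sign of the Fiedler vector of the graph Laplacian, recurse on each side, and stop when a cluster is too small. The algorithms in the talk use carefully chosen spectral methods and stopping rules with consistency guarantees; this sketch shows only the recursive skeleton.

```python
import numpy as np

def fiedler_split(A):
    """Split nodes by the sign of the second-smallest Laplacian eigenvector."""
    d = A.sum(axis=1)
    L = np.diag(d) - A
    _, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1] >= 0

def recursive_bipartition(A, nodes=None, min_size=5):
    """Top-down recursive partitioning; stops when a block would become too small."""
    if nodes is None:
        nodes = np.arange(A.shape[0])
    if len(nodes) < 2 * min_size:
        return [nodes]
    sub = A[np.ix_(nodes, nodes)]
    side = fiedler_split(sub)
    if side.all() or (~side).all():            # no meaningful split found
        return [nodes]
    return (recursive_bipartition(A, nodes[side], min_size)
            + recursive_bipartition(A, nodes[~side], min_size))
```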

Bio: Lihua Lei is a postdoctoral researcher in Statistics at Stanford University, advised by Emmanuel Candès. Prior to joining Stanford, he obtained his Ph.D. in statistics at UC Berkeley, advised by Peter Bickel and Michael Jordan. His research areas include network analysis, causal inference, conformal inference, multiple hypothesis testing, and stochastic optimization.
May 5, 2021
Mukund Sundararajan
Using Attribution to Understand Deep Neural Networks
→ Abstract and Bio Predicting cancer from XRays seemed great Until we discovered the true reason. The model, in its glory, did fixate On radiologist markings - treason! We found the issue with attribution: By blaming pixels for the prediction (1,2,3,4,5,6). A complement'ry way to attribute, is to pay training data, a tribute (1). If you are int'rested in FTC, counterfactual theory, SGD Or Shapley values and fine kernel tricks, Please come attend, unless you have conflicts Should you build deep models down the road, Use attributions. Takes ten lines of code!

Bio: There once was an RS called MS, He studied models that are a mess, A director at Google. Accurate and frugal, Explanations are what he liked best.
May 26, 2021
Mark Braverman
Optimization-friendly generic mechanisms without money
→ Abstract and Bio Our goal is to develop a generic framework for converting modern gradient-descent based optimization algorithms into mechanisms where inputs come from self-interested agents. We focus on aggregating preferences from n players in a context without money. Special cases of this setting include voting, allocation of items by lottery, and matching. Our key technical contribution is a new meta-algorithm we call APEX (Adaptive Pricing Equalizing Externalities). The framework is sufficiently general to be combined with any optimization algorithm that is based on local search. In the talk I'll outline the algorithm, and open problems/research directions that it raises, with a particular focus towards mechanism design + ML. If time permits, I will discuss a special case of applying the framework to the problem of one-sided allocation with lotteries. In this case, we obtain a strengthening of the 1979 result by Hylland and Zeckhauser on allocation via a competitive equilibrium from equal incomes (CEEI). The [HZ79] result posits that there is a (fractional) allocation and a set of item prices such that the allocation is a competitive equilibrium given the prices. We further show that there is always a reweighting of the players' utility values such that running the standard unit-demand VCG with reweighted utilities leads to HZ equilibrium prices. Interestingly, not all HZ competitive equilibria come from VCG prices.

Bio: Mark Braverman is a professor of Computer Science at Princeton University. He works primarily on building new connections between theoretical computer science and other disciplines, including information theory, algorithmic mechanism design, dynamical systems, analysis, and geometry. He received a 2013 Packard Fellowship and the 2019 NSF Alan T. Waterman Award.
June 2, 2021
Elena Manresa (12-1 pm PT)
An Adversarial Approach to Structural Estimation
→ Abstract and Bio We propose a new simulation-based estimation method, adversarial estimation, for structural models. The estimator is formulated as the solution to a minimax problem between a generator (which generates synthetic observations using the structural model) and a discriminator (which classifies whether an observation is synthetic). The discriminator maximizes the accuracy of its classification while the generator minimizes it. We show that, with a sufficiently rich discriminator, the adversarial estimator attains parametric efficiency under correct specification and the parametric rate under misspecification. We advocate the use of a neural network as a discriminator that can exploit adaptivity properties and attain fast rates of convergence. We apply our method to a model of the elderly's saving decisions and show that including gender and health profiles in the discriminator uncovers the bequest motive as an important source of saving across the wealth distribution, not only for the rich.
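A toy rendition of the estimator's minimax logic, using a simple simulable "structural model" (a normal location-scale family) and a logistic-regression discriminator: for each candidate parameter we simulate data, train the discriminator to tell real from synthetic observations, and keep the parameter for which the discriminator does worst. This is an illustrative sketch under those assumptions, not the paper's neural-network implementation or its empirical model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def discriminator_accuracy(real, synthetic):
    """How well a logistic discriminator separates real from simulated observations."""
    X = np.concatenate([real, synthetic]).reshape(-1, 1)
    y = np.concatenate([np.ones(len(real)), np.zeros(len(synthetic))])
    clf = LogisticRegression().fit(X, y)
    return clf.score(X, y)

def adversarial_estimate(real, candidate_params, n_sim=2000, seed=0):
    """Pick the (mu, sigma) whose simulated data the discriminator separates worst."""
    rng = np.random.default_rng(seed)
    best, best_acc = None, np.inf
    for mu, sigma in candidate_params:
        synthetic = rng.normal(mu, sigma, size=n_sim)   # generator: simulate the model
        acc = discriminator_accuracy(real, synthetic)
        if acc < best_acc:                              # grid search stands in for the
            best, best_acc = (mu, sigma), acc           # minimization over parameters
    return best

rng = np.random.default_rng(1)
real = rng.normal(2.0, 1.5, size=2000)
grid = [(m, s) for m in np.linspace(0, 4, 9) for s in np.linspace(0.5, 2.5, 9)]
print(adversarial_estimate(real, grid))
```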

2019-2020

Oct. 2, 2019
Shane Henderson
Under the Hood of Bike Sharing
Oct. 16, 2019
Suleyman Kerimov (first half)
Scrip Systems with Minimal Availability
Oct. 16, 2019
Nikhil Garg (second half)
Driver Surge Pricing
Oct. 30, 2019
Rupert Freeman
Truthful Aggregation of Budget Proposals
Nov. 13, 2019
Aleksandra Korolova
Societal Concerns in Targeted Advertising
Nov. 20, 2019
Ali Aouad (OR Seminar)
Click-Based MNL: Algorithmic Frameworks for Modeling Click Data in Assortment Optimization
Dec. 4, 2019
Peng Shi
Optimal Priority-Based Allocation Mechanisms
Jan. 15, 2020
Matt Weinberg
(a biased selection of) Recent Developments in Combinatorial Auctions
Jan. 22, 2020
Sham Kakade (OR Seminar)
Representation, Modeling, and Optimization in Reinforcement Learning
Jan. 29, 2020
Warren Powell (OR Seminar)
From Reinforcement Learning to Stochastic Optimization: A Universal Framework for Sequential Decision Analytics
Feb. 5, 2020
Haris Aziz
Fair and Efficient Allocation of Indivisible Goods and Chores
Feb. 12, 2020
Avi Mandelbaum (OR Seminar)
"Theompirical" Research of Service Systems
Feb. 19, 2020
Edward McFowland III
Estimating Causal Peer Influence in Homophilous Social Networks by Inferring Latent Locations
Feb. 26, 2020
Benjamin Plaut (first half)
Beyond the First Welfare Theorem: Counteracting Inequality in Markets
Feb. 26, 2020
Faidra Monachou (second half)
Discrimination in Online Markets: Effects of Social Bias on Learning from Reviews and Policy Design
Mar. 11, 2020
Kent Quanrud (OR Seminar)
TBA
Apr. 8, 2020
Sandra Gonzalez-Bailon
TBA
May 20, 2020
Chris Musco (OR Seminar)
TBA

Current talks can be found here. Talks and abstracts prior to 2020 can be found here.