Stanford RAIN (Research on Algorithms and Incentives in Networks) Seminar Talk Archive

2021-2022

Oct. 6, 2021
Nihar Shah
Two F-words in Peer Review (Fraud and Feedback)
Abstract: In this talk, we present two major challenges in peer review, propose solutions with guarantees, and discuss important open problems. (1) Fraud: There have been several recent discoveries of fraud in peer review: a group of participants form a coalition, get assigned each other's papers by manipulating the system, and then accept each other's papers. We present an algorithm that mitigates such fraud by randomizing reviewer assignments, and does not rely on assumptions about the malicious behavior. The algorithm yields an optimal-quality assignment subject to the randomization constraints, and we will discuss experiments characterizing this tradeoff. (2) Feedback: Real-world systems rely on feedback about their performance for their continual improvement. A useful means of obtaining feedback about the peer-review process is to ask authors' opinions. However, author opinions are significantly biased by whether their paper was accepted. We formulate this problem and present algorithms to debias such feedback. Our work relies on the key observation that the direction of this bias is known: the program chairs know which authors' papers were accepted. An overview of research on peer review is available at http://bit.ly/PeerReviewOverview
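
As background on the randomization approach, the assignment step can be cast as a small linear program: maximize total reviewer-paper similarity subject to a cap q on every individual assignment probability, so that no colluding reviewer-author pair can guarantee being matched. The Python sketch below (toy data, illustrative names, not the talk's actual implementation) shows that step:

    import numpy as np
    from scipy.optimize import linprog

    def assignment_marginals(S, load, coverage, q):
        # S[r, p]: similarity of reviewer r and paper p. Variables are the
        # marginal assignment probabilities x[r, p], flattened reviewer-major.
        n_rev, n_pap = S.shape
        c = -S.ravel()                           # linprog minimizes, so negate
        A_eq = np.zeros((n_pap, n_rev * n_pap))
        for p in range(n_pap):
            A_eq[p, p::n_pap] = 1.0              # each paper gets `coverage` reviewers
        A_ub = np.zeros((n_rev, n_rev * n_pap))
        for r in range(n_rev):
            A_ub[r, r * n_pap:(r + 1) * n_pap] = 1.0   # reviewer load limit
        res = linprog(c, A_ub=A_ub, b_ub=np.full(n_rev, load),
                      A_eq=A_eq, b_eq=np.full(n_pap, coverage),
                      bounds=(0.0, q))           # the cap q is the randomization knob
        return res.x.reshape(n_rev, n_pap)

    rng = np.random.default_rng(0)
    P = assignment_marginals(rng.random((5, 4)), load=2, coverage=2, q=0.5)

An assignment is then sampled so that reviewer r reviews paper p with probability P[r, p]; lowering q trades assignment quality for robustness to collusion.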

Bio: Nihar B. Shah is an Assistant Professor in the Machine Learning and Computer Science departments at Carnegie Mellon University (CMU). His research interests span statistics, machine learning, information theory, and game theory, recently focusing on improving the peer-review process by designing computational methods, mathematical guarantees, experimental evaluations and deployments. He is a recipient of a Google Research Scholar Award 2021, an NSF CAREER Award 2020-25, the 2017 David J. Sakrison memorial prize from EECS Berkeley for a "truly outstanding and innovative PhD thesis", the Microsoft Research PhD Fellowship 2014-16, the Berkeley Fellowship 2011-13, the IEEE Data Storage Best Paper and Best Student Paper Awards for the years 2011/2012, and the SVC Aiya Medal 2010; he also supervised the Best Student Paper at AAMAS 2019.
Oct. 20, 2021
Katrina Ligett
Gaming Helps! Learning from Strategic Interactions in Natural Dynamics
Abstract: Those who are being classified are sometimes aware of the classifiers that are being used on them, and may have incentive to change their behavior to try to improve the label they receive. The attitude towards such strategic behavior, both in practice and in theoretical treatment, has generally been quite negative, and this is one of the reasons that the internal workings of high-stakes classifiers are often shrouded in secrecy. However, intuitively, agents who strategically change their behavior in response to incentives set by classifiers may actually be doing everyone a favor: they are helping teach the classifier whether the variables that the classifier depends on are truly meaningful for the task at hand; that is, features that, when changed, affect the true label (as opposed to non-meaningful features that have no effect). Over time, this could push the system to develop better classifiers, and could push individuals to invest more in meaningful variables. We study this issue in an online regression setting. Joint work with Yahav Bechavod, Zhiwei Steven Wu, and Juba Ziani. Work appeared at AISTATS'21.

Bio: Katrina Ligett is an Associate Professor of Computer Science at the Hebrew University of Jerusalem, where she is also the Head of the Program on Internet & Society. Her research interests include data privacy, algorithmic fairness, machine learning theory, and algorithmic game theory. She received her PhD in computer science in 2009, from Carnegie Mellon University, and did a postdoc at Cornell University. Before joining the Hebrew University, Katrina was on the faculty in computer science and economics at Caltech.
Nov. 3, 2021
Douglas Guilbeault
How communication networks promote cross-cultural similarities: The case of category formation
Abstract: Individuals vary widely in how they categorize novel phenomena. This individual variation has led canonical theories in cognitive and social science to suggest that communication in large social networks leads populations toward divergent category systems. Yet, anthropological data indicates that large, independent societies consistently arrive at similar categories across a range of topics. How is it possible for diverse populations, consisting of individuals with significant variation in how they view the world, to independently construct similar categories? Through a series of online experiments, I show how large communication networks within cultures can promote the formation of similar categories across cultures. I use the online “Grouping Game” to observe how people construct categories in both small and large populations when presented with the same novel images. I replicate this design for English-speaking subjects in the U.S. and Mandarin-speaking subjects in China. In both cultures, solitary individuals and small social groups produced highly divergent category systems. Yet, large social groups separately and consistently arrived at highly similar categories both within and across cultures. These findings are accurately predicted by a simple mathematical model of critical mass dynamics. Altogether, I show how large communication networks can filter lexical diversity among individuals to produce replicable society-level patterns, yielding unexpected implications for cultural evolution.

Bio: Douglas Guilbeault is an assistant professor in the management of organizations at the Berkeley Haas School of Business. His research focuses on how communication networks underlie the production and diffusion of cultural content, such as linguistic categories and social norms. His studies have been published in a number of top journals, including Nature Communications, the Proceedings of the National Academy of Sciences, and Management Science. He has also received top research awards from the International Conference on Computational Social Science, the Cognitive Science Society, and Facebook. Douglas teaches People Analytics at Haas, and he has served as a People Analytics consultant for a variety of organizations.
Nov. 30, 2021
Vijay Vazirani (Tues 3 pm, joint with CS Theory)
Online Bipartite Matching and Adwords
Abstract: Over the last three decades, the online bipartite matching (OBM) problem has emerged as a central problem in the area of Online Algorithms. Perhaps even more important is its role in the area of Matching-Based Market Design. The resurgence of this area, with the revolutions of the Internet and mobile computing, has opened up novel, path-breaking applications, and OBM has emerged as its paradigmatic algorithmic problem. In a 1990 joint paper with Richard Karp and Umesh Vazirani, we gave an optimal algorithm, called RANKING, for OBM, achieving a competitive ratio of (1 – 1/e); however, its analysis was difficult to comprehend. Over the years, several researchers simplified the analysis. We will start by presenting a “textbook quality” proof of RANKING. Its simplicity raises the possibility of extending RANKING all the way to a generalization of OBM called the adwords problem. This problem is both notoriously difficult and very significant, the latter because of its role in the AdWords marketplace of Google. We will show how far this endeavor has gone and what remains. We will also provide a broad overview of the area of Matching-Based Market Design and pinpoint the role of OBM. Based on: https://arxiv.org/pdf/2107.10777.pdf
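
For readers unfamiliar with RANKING, the algorithm itself fits in a few lines: fix one uniformly random priority order over the offline vertices, then match each arriving online vertex to its highest-priority free neighbor. A minimal Python sketch (the data layout is illustrative):

    import random

    def ranking(offline, online_neighbors, seed=0):
        rng = random.Random(seed)
        perm = list(offline)
        rng.shuffle(perm)                 # one random permutation, fixed up front
        rank = {v: i for i, v in enumerate(perm)}
        free = set(offline)
        matching = []
        for u, nbrs in enumerate(online_neighbors):
            candidates = [v for v in nbrs if v in free]
            if candidates:                # match to the highest-priority free neighbor
                v = min(candidates, key=rank.get)
                free.discard(v)
                matching.append((u, v))
        return matching

    # Toy instance: 3 online vertices with neighbor lists over offline {0, 1, 2}.
    print(ranking([0, 1, 2], [[0, 1], [0], [1, 2]]))

In expectation over the random permutation, the matching has size at least (1 – 1/e) times the offline optimum.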

Bio: Vijay Vazirani got his undergraduate degree from MIT in 1979 and his PhD from the University of California, Berkeley in 1983. He is currently a Distinguished Professor at the University of California, Irvine. Vazirani has made fundamental contributions to several areas of the theory of algorithms, including algorithmic matching theory, approximation algorithms and algorithmic game theory, as well as to complexity theory, in which he established, with Les Valiant, the hardness of unique solution instances of NP-complete problems. Over the last four years, he has been working on algorithms for matching markets. He is one of the founders of algorithmic game theory. In 2001 he published Approximation Algorithms, which is widely regarded as the definitive book on the topic. In 2007, he published the co-edited book Algorithmic Game Theory. Another co-edited book, Online and Matching-Based Market Design, will be published by Cambridge University Press in early 2022; see its flyer: https://www.ics.uci.edu/~vazirani/flyer.pdf
Feb. 23, 2022
Omer Tamuz
Private Private Information
Abstract: In a private private information structure, agents’ signals contain no information about the signals of their peers. We study how informative such structures can be, and characterize those that are on the Pareto frontier, in the sense that it is impossible to give more information to any agent without violating privacy. In our main application, we show how to optimally disclose information about an unknown state under the constraint of not revealing anything about a correlated variable that contains sensitive information.

Bio: Omer Tamuz is a professor of economics and mathematics at Caltech.
Mar. 9, 2022
Ron Berman
Naive analytics equilibrium
Abstract: We study interactions with uncertainty about demand sensitivity. In our solution concept, (1) firms choose seemingly-optimal strategies given the level of sophistication of their data analytics, and (2) the levels of sophistication form best responses to one another. Under the ensuing equilibrium, firms underestimate price elasticities and overestimate advertising effectiveness, as observed empirically. The misestimates cause firms to set prices too high and to over-advertise. In games with strategic complements (substitutes), profits Pareto dominate (are dominated by) those of the Nash equilibrium. Applying the model to team production games explains the prevalence of overconfidence among entrepreneurs and salespeople.

Bio: Ron Berman is an assistant professor of marketing at Wharton. He focuses his research on digital marketing and marketing analytics. Recently Ron has been investigating how firms assess and optimize marketing effectiveness through experiments, how curation algorithms may create filter bubbles on social media, and how descriptive analytics affects online firm performance. His research has been published in top marketing journals such as Marketing Science and the Journal of Marketing Research, and he is a member of the editorial boards of the Journal of Marketing Research and Quantitative Marketing and Economics. Ron disseminates his research by teaching Digital Marketing courses in undergraduate, MBA, and Executive Education programs, and is often invited by market-leading firms including Google, Facebook, and Wayfair to share and discuss his research. Ron’s experience includes early-stage venture capital investing at Viola Ventures (formerly Carmel Ventures) and developing software for the Israel Defense Forces (IDF). Ron is an active advisor and investor, involved with startups such as Desti (travel planning, acquired by Nokia), Zimperium (cyber security), Abakus (advertising attribution, acquired by SAP), Peerspace (P2P venue marketplace), Netlify (serverless website deployment), Stackbit (content management), cauzal.ai (conversion optimization) and Honeycomb Insurance (commercial real-estate insurance). Ron holds a PhD and MSc in Business Administration (Marketing) from the University of California, Berkeley, an MBA and MSc in Computer Science from Tel-Aviv University, and a BSc in Computer Science, Physics and Mathematics from the Hebrew University of Jerusalem.
Mar. 30, 2022
Christina Yu
Simple yet Efficient Graph Agnostic Estimators for Network Causal Inference - from Linear to Low Degree Polynomial Models
Abstract: Randomized experiments are widely used to estimate causal effects of proposed "treatments" in domains spanning the physical sciences, social sciences, medicine, and technology industries. However, classical approaches to experimental design rely on critical independence assumptions that are violated when the outcome of an individual a may be affected by the treatment of another individual b, referred to as network interference. Under such network interference, naively using popular estimators and randomized experimental designs can result in significant bias and loss of efficiency. We consider heterogeneous linear and polynomial potential outcomes models for network interference, under which we propose simple estimators for the total treatment effect that output unbiased estimates with low variance under simple randomized designs. Our solution and statistical guarantees do not rely on restrictive network properties, allowing for highly connected graph structures. When the network is completely unknown, we provide a simple unbiased and efficient estimator under a staggered rollout randomized design, showing that the flexibility afforded by experimentation implemented over time can remove any requirement of network knowledge. We believe our results are poised to impact current randomized experimentation strategies due to their simplicity and ease of implementation, wide applicability across different network structures, and statistical guarantees under a flexible hierarchy of network interference models.
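
One way to see why a staggered rollout removes the need for network knowledge: under a degree-β polynomial interference model, the expected population-average outcome under a Bernoulli(p) rollout is itself a degree-β polynomial in p, so observing the rollout at β+1 treatment levels lets one fit that polynomial and read off the total treatment effect as f(1) – f(0). A rough Python illustration of the extrapolation step (the talk's estimator may differ in its exact form):

    import numpy as np

    def tte_from_staggered_rollout(ps, avg_outcomes, beta):
        # Fit a degree-beta polynomial to average outcomes observed at the
        # rollout probabilities ps, then extrapolate to full and no treatment.
        f = np.poly1d(np.polyfit(ps, avg_outcomes, deg=beta))
        return f(1.0) - f(0.0)

    # Toy linear (beta = 1) case: averages measured at two rollout stages.
    print(tte_from_staggered_rollout([0.0, 0.5], [1.0, 2.1], beta=1))  # 2.2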

Bio: Christina Lee Yu is an Assistant Professor at Cornell University in the School of Operations Research and Information Engineering. Prior to Cornell, she was a postdoc at Microsoft Research New England. She received her PhD in 2017 and MS in 2013 in Electrical Engineering and Computer Science from Massachusetts Institute of Technology in the Laboratory for Information and Decision Systems. She received her BS in Computer Science from California Institute of Technology in 2011. She received honorable mention for the 2018 INFORMS Dantzig Dissertation Award. She is a recipient of the 2021 Intel Rising Stars Award and a JPMorgan Faculty Research Award. Her research interests include algorithm design and analysis, high dimensional statistics, inference over networks, sequential decision making under uncertainty, online learning, and network causal inference.
Apr. 6, 2022
Constantinos Daskalakis
What does it take to be a good fisherman?
Abstract: A reasonable approach to figure this out is to collect training data comprising features of fishermen and their daily catch, and then learn a model mapping fishermen features to the size of their catch. Reasonable as this approach may sound, it will most likely result in a biased model. The reason for this bias is that the training data will miss all those individuals who were not good enough at fishing and decided to become hunters (or do something else) instead. Such self-selection bias is pervasive. From understanding what it takes to be a good college student or company employee to learning from expert demonstrations and understanding strategic behavior in markets, data available for learning statistical models are the results of strategic decisions that have already operated on and filtered out some of the relevant data. I will discuss recent progress on some classical econometric challenges revolving around estimating linear models under self-selection bias, and identification of non-parametric auction models, and present several open directions for future investigation. This talk is based on joint works with Yeshwanth Cherapanamjeri, Andrew Ilyas, and Manolis Zampetakis.
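
The fisherman example is easy to reproduce numerically: regress catch on skill using only the individuals whose catch cleared a "stay a fisherman" threshold, and the fitted slope is attenuated relative to the truth. A small simulation (all numbers arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    skill = rng.normal(size=n)
    catch = 2.0 * skill + rng.normal(size=n)    # true coefficient is 2.0

    # Self-selection: only those with catch above a threshold stay fishermen,
    # so the training data is truncated on the outcome.
    keep = catch > 0.0
    slope = np.polyfit(skill[keep], catch[keep], 1)[0]
    print(f"OLS slope on the self-selected sample: {slope:.2f} (truth: 2.00)")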

Bio: Constantinos (aka "Costis" with an accent on "i") Daskalakis is a Professor of Electrical Engineering and Computer Science at MIT. He holds a Diploma in Electrical and Computer Engineering from the National Technical University of Athens, and a PhD in Electrical Engineering and Computer Science from UC Berkeley. He works on Computation Theory and its interface with Game Theory, Economics, Probability Theory, Machine Learning and Statistics. He has resolved long-standing open problems about the computational complexity of Nash equilibrium, and the mathematical structure and computational complexity of multi-item auctions. His current work focuses on high-dimensional statistics and learning from biased, dependent, or strategic data. He has been honored with the ACM Doctoral Dissertation Award, the Kalai Prize from the Game Theory Society, the Sloan Fellowship in Computer Science, the SIAM Outstanding Paper Prize, the Microsoft Research Faculty Fellowship, the Simons Investigator Award, the Rolf Nevanlinna Prize from the International Mathematical Union, the ACM Grace Murray Hopper Award, and the Bodossaki Foundation Distinguished Young Scientists Award.
Apr. 20, 2022
Ravi Jagadeesan
Matching and Prices
Abstract: Indivisibilities and budget constraints are pervasive features of many matching markets. But when taken together, these features typically cause failures of gross substitutability, a standard condition on preferences imposed in most matching models. To accommodate budget constraints and other income effects, we analyze matching markets under a weaker condition: net substitutability. Although competitive equilibria do not generally exist in our setting, we show that stable outcomes always exist and are efficient. However, standard auctions and matching procedures, such as the Deferred Acceptance algorithm and the Cumulative Offer process, do not generally yield stable outcomes. We illustrate how the flexibility of prices is critical for our results. We also discuss how budget constraints and other income effects affect classic properties of stable outcomes. Joint work with Alex Teytelboym.

Bio: Ravi Jagadeesan is a Postdoctoral Fellow at Stanford. Starting in July, he will be an Assistant Professor in the Department of Economics at Stanford. He completed his PhD in Business Economics at Harvard in Spring 2020. Before that, he graduated from Harvard with an A.B. in Mathematics and an A.M. in Statistics in Spring 2018.
Apr. 27, 2022
Federico Echenique
Empirical Welfare Economics
Abstract: We provide an empirical analogue to the equality of marginal rates of substitution condition for Pareto optimality, and address related questions. Welfare economics relies on agents’ utility functions: we revisit classical questions in welfare economics, assuming access to agents’ past choices instead of their utility functions. Our main result considers whether there are convex preferences for which some candidate allocation is Pareto optimal. We show that this candidate allocation is possibly efficient if and only if it is efficient for the incomplete relation derived from the revealed preference relations and convexity. Similar ideas are used to address when the Kaldor criterion may be used to make welfare comparisons, what prices can be Walrasian equilibrium prices, and the possibility of a representative consumer when the income distribution is endogenous. Joint work with Chris Chambers.

Bio: Federico Echenique is the Allen and Lenabelle Davis Professor of Economics at Caltech. His research focuses on understanding economic models of agents and markets. He is interested in determining the testable implications of models and the relationship between different theoretical models and the data possibly used to test them. He is also studying fairness and efficiency in discrete allocation problems, such as two-sided matching markets and one-sided object allocation. Echenique is active in research at the intersection of economics and computer science. He is Licenciado en Economía from the Universidad de la República in Uruguay and holds a PhD in economics from UC Berkeley. Prior to joining the Caltech faculty, he was an assistant professor at the Universidad de la República from 2000 to 2002 and at the Universidad Torcuato Di Tella in Buenos Aires from 2001 to 2002. He has served on the editorial boards of the American Economic Review, Econometrica, The Economic Journal, Economic Theory, and the Journal of Economic Theory, and he is currently a co-editor of Theoretical Economics. Echenique is also a fellow of the Econometric Society.

2020-2021

Sep. 30, 2020
Betsy Ogburn
Social network dependence, the replication crisis, and (in)valid inference
Abstract: In the first part of this talk, we show that social network dependence can result in spurious associations due to network dependence, potentially contributing to replication crises across the health and social sciences. Researchers in these fields frequently sample subjects from one or a small number of communities, schools, hospitals, etc., and while many of the limitations of such convenience samples are well-known, the issue of statistical dependence due to social network ties has not previously been addressed. A paradigmatic example of this is the Framingham Heart Study (FHS). Using a statistic that we adapted to measure network dependence, we test for network dependence and for possible spurious associations in several of the thousands of influential papers published using FHS data. Results suggest that some of the many decades of research on coronary heart disease, other health outcomes, and peer influence using FHS data may suffer from spurious estimates of association and anticonservative uncertainty quantification due to unacknowledged network structure. But data with network dependence abounds, and in many settings researchers are explicitly interested in learning about social network dynamics. Therefore, there is high demand for methods for causal and statistical inference with social network data. The second part of the talk describes recent work on causal inference for observational data from a single social network, focusing on (1) new types of causal estimands that are of interest in social network settings, and (2) conditions under which central limit theorems hold and inference based on approximate normality is licensed.
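
The flavor of test statistic involved can be illustrated with Moran's I, the classic measure of network autocorrelation, which correlates each subject's demeaned outcome with those of their network neighbors (the statistic in this line of work is an adaptation, not this exact form):

    import numpy as np

    def morans_i(A, y):
        # A: symmetric adjacency matrix; y: one outcome per node.
        # Values near 0 suggest no network dependence; positive values
        # suggest that connected subjects have correlated outcomes.
        z = y - y.mean()
        n = len(y)
        return (n / A.sum()) * (z @ A @ z) / (z @ z)

    rng = np.random.default_rng(0)
    A = np.triu(rng.random((50, 50)) < 0.1, 1)
    A = (A | A.T).astype(float)                 # symmetric, no self-loops
    print(morans_i(A, rng.normal(size=50)))     # near 0 for i.i.d. outcomes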

Bio: Dr. Elizabeth (Betsy) Ogburn is currently an Associate Professor in the Department of Biostatistics at Johns Hopkins University. She is also the founder of the COVID-19 Collaboration Platform. She received her Ph.D. in Biostatistics from Harvard University, and her research interests include causal inference and epidemiological methods.
Oct. 7, 2020
Carrie Wu (SOAL Seminar)
Developing Data Efficient Algorithms for AI
Abstract: Our increasingly ambitious goals in artificial intelligence motivate several key algorithmic challenges: how do we design algorithms that make the best use of the available data, and how do we design algorithms that are empirically and theoretically effective on the kinds of data we often see in practice, such as data with temporal dependencies and data that follow distributions that are hard to describe? In this talk, I will give examples of algorithmic solutions that address some of these challenges. I will first present a theoretical analysis of rates of convergence for SGD with experience replay, which is a technique used in Reinforcement Learning to break temporal correlations in data. I will then present an algorithm that solves Markov Decision Processes with nearly optimal sample and runtime guarantees. Lastly, I will present an algorithmic solution for estimating local density for an arbitrary dataset.
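
For context, experience replay stores transitions in a buffer and performs updates on random minibatches drawn from it, so consecutive SGD steps are not fed temporally adjacent samples. A generic Python sketch of the mechanism (not the specific algorithm analyzed in the talk):

    import random
    from collections import deque

    class ReplayBuffer:
        def __init__(self, capacity):
            self.buf = deque(maxlen=capacity)   # oldest transitions evicted first

        def push(self, transition):
            self.buf.append(transition)

        def sample(self, batch_size):
            # Uniform sampling decorrelates consecutive transitions.
            return random.sample(list(self.buf), batch_size)

    buf = ReplayBuffer(capacity=10_000)
    for t in range(100):
        buf.push((t, t + 1))                    # stand-in for (state, next state)
    batch = buf.sample(32)                      # minibatch for one SGD update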
Oct. 14, 2020
Ruta Mehta
Nash Social Welfare Approximation for Strategic Agents
Abstract: A central goal in the long literature on fair division is the design of mechanisms that implement fair outcomes, despite the participants' strategic behavior. We study this question by measuring the fairness of an allocation using the geometric mean of the agents' values, known as the Nash social welfare (NSW). This objective is maximized by widely known concepts such as the Nash bargaining solution, proportional fairness, and the competitive equilibrium with equal incomes; we focus on (approximately) implementing this objective and analyze the Trading Post mechanism. We consider allocating goods that are substitutes or complements and show that this mechanism achieves an approximation of 2 for concave utility functions, and becomes essentially optimal for complements, where it can reach (1+ε) for any ε > 0. Moreover, we show that the Nash equilibria of this mechanism are pure and provide individual fairness in the sense of proportionality. (Joint work with Simina Branzei and Vasilis Gkatzelis. To appear in Operations Research.)
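
Concretely, the Nash social welfare of an allocation giving agents utilities u_1, ..., u_n is the geometric mean (u_1 · u_2 · ... · u_n)^(1/n), which interpolates between the utilitarian (sum) and egalitarian (min) objectives. A direct computation, done in log-space for numerical stability:

    import numpy as np

    def nash_social_welfare(utilities):
        # Geometric mean of the agents' utilities (assumed positive).
        u = np.asarray(utilities, dtype=float)
        return float(np.exp(np.log(u).mean()))

    print(nash_social_welfare([1.0, 4.0, 2.0]))  # (1*4*2)**(1/3) = 2.0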

Bio: Ruta Mehta is an assistant professor in CS at UIUC, working on algorithmic, complexity, strategic, fairness, and learning aspects of various game-theoretic and economic problems. Prior to this, she was a postdoctoral fellow at the Simons Institute, UC Berkeley (Aug '15 - Dec '15), and at Georgia Tech (Sept '12 - July '15). She received her Ph.D. from IIT Bombay. Her Ph.D. thesis, titled 'Nash Equilibrium Computation in Various Games', won the ACM India Doctoral Dissertation Award, 2012. Other awards she has received include the Best Postdoctoral Fellow Award at Georgia Tech (2014) and the NSF CAREER Award (2018).
Oct. 21, 2020
Karl Rohe
Vintage Factor Analysis with Varimax Performs Statistical Inference
Abstract: Vintage Factor Analysis is nearly a century old and remains popular today with practitioners. A key step, the factor rotation, is historically controversial because it appears to be unidentifiable. This controversy goes back as far as Charles Spearman. The unidentifiability is still reported in all modern multivariate textbooks. This talk will overturn this controversy and provide a positive theory for PCA with a varimax rotation. Just as sparsity helps to find a solution in p > n regression, we show that sparsity resolves the rotational invariance of factor analysis. PCA + varimax is fast to compute and provides a unified spectral estimation strategy for Stochastic Blockmodels, topic models (LDA), and nonnegative matrix factorization. Moreover, the estimator is consistent for an even broader class of models, and the old factor analysis diagnostics (which have been used for nearly a century) assess the identifiability. Paper: https://arxiv.org/abs/2004.05387
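
As a reference point, PCA + varimax really is only a few lines of numpy: take the top-k principal-component loadings, then apply the classical SVD-based varimax iteration. A sketch with conventional default hyperparameters:

    import numpy as np

    def varimax(L, n_iter=100, tol=1e-8):
        # Classical varimax rotation of a p-by-k loadings matrix L.
        p, k = L.shape
        R, d = np.eye(k), 0.0
        for _ in range(n_iter):
            Lam = L @ R
            u, s, vt = np.linalg.svd(
                L.T @ (Lam**3 - Lam @ np.diag((Lam**2).sum(axis=0)) / p))
            R = u @ vt
            if s.sum() < d * (1 + tol):
                break
            d = s.sum()
        return L @ R

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))
    X -= X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    rotated = varimax(Vt[:3].T)      # varimax-rotated top-3 PCA loadings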

Bio: Karl Rohe is an Associate Professor of Statistics at the University of Wisconsin-Madison, with courtesy appointments in Journalism, Educational Psychology, and Electrical & Computer Engineering. He is an associate editor at JRSS-B and JASA. He received his PhD from Berkeley in 2011, working with Bin Yu.
Oct. 28, 2020
Christopher Musco (OR Seminar)
Optimal Stochastic Trace Estimation
Abstract: I will discuss algorithms for an important computational primitive in linear algebra: approximately computing the trace of an implicit matrix A that can only be accessed through matrix-vector multiplications. Trace approximation finds applications across machine learning and scientific computing, where it is used to compute matrix norms, spectral densities, log-determinants, triangle counts in graphs, and much more. In 1990, Hutchinson introduced an elegant randomized algorithm for the trace approximation problem that has become ubiquitous in practice. I will introduce a simple modified version of this algorithm that provides the same theoretical guarantees, but requires quadratically fewer matrix-vector multiplications, and performs far better in experiments. We pair this result with matching lower bounds based on reductions to communication complexity and hypothesis testing for spiked-covariance matrices. Our lower bounds fall into a broader research agenda of better understanding the computational complexity of basic linear algebra problems in the restricted 'matrix-vector query' model of computation, which generalizes common algorithmic frameworks like linear sketching and Krylov subspace methods. Joint work with Raphael A. Meyer, Cameron Musco, and David P. Woodruff. Paper: https://arxiv.org/abs/2010.09649
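
To make the setup concrete, here is Hutchinson's estimator next to a Hutch++-style variant: spend part of the matrix-vector budget sketching the dominant subspace of A, compute its trace contribution exactly, and run Hutchinson only on the deflated remainder. A numpy sketch assuming a symmetric A accessed via matvecs:

    import numpy as np

    def hutchinson(matvec, n, m, rng):
        # Average of g^T A g over m random sign vectors g.
        G = rng.choice([-1.0, 1.0], size=(n, m))
        return (G * matvec(G)).sum(axis=0).mean()

    def hutch_pp(matvec, n, m, rng):
        # Hutch++-style: exact trace on a sketched top subspace, plus
        # Hutchinson on the deflated remainder (total budget ~ m matvecs).
        k = m // 3
        S = rng.choice([-1.0, 1.0], size=(n, k))
        Q, _ = np.linalg.qr(matvec(S))          # capture the dominant range of A
        t_top = np.trace(Q.T @ matvec(Q))
        G = rng.choice([-1.0, 1.0], size=(n, k))
        G -= Q @ (Q.T @ G)                      # project queries off that subspace
        return t_top + (G * matvec(G)).sum(axis=0).mean()

    rng = np.random.default_rng(0)
    B = rng.normal(size=(200, 200))
    A = B @ B.T                                 # symmetric PSD test matrix
    mv = lambda X: A @ X
    print(np.trace(A), hutchinson(mv, 200, 90, rng), hutch_pp(mv, 200, 90, rng))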
Nov. 4, 2020
Katy Craig (OR Seminar)
Gradient Flows in the Wasserstein Metric: From Discrete to Continuum via Regularization
Abstract: Over the past ten years, optimal transport has become a fundamental tool in statistics and machine learning: the Wasserstein metric provides a new notion of distance for classifying distributions and a rich geometry for interpolating between them. In parallel, optimal transport has led to new theoretical results on the stability and long time behavior of partial differential equations through the theory of Wasserstein gradient flows. These two lines of research recently intersected in a series of works that characterized the dynamics of training neural networks with a single hidden layer as a Wasserstein gradient flow. In this talk, I will briefly introduce the mathematical theory of Wasserstein gradient flows and describe recent results on discrete to continuum limits. In particular, I will show how passing from the discrete to continuum limit by introducing an appropriate regularization can lead to faster rates of convergence, as well as novel, deterministic particle methods for diffusive processes.

Bio: Katy Craig is an assistant professor in the department of mathematics at the University of California, Santa Barbara. Prior to coming to UCSB, Katy received her Ph.D. from Rutgers University, held an NSF postdoc at UCLA, and held a UC President’s Postdoctoral Fellowship at UCSB.
Nov. 18, 2020
Guillaume Basse (RAIN/OR Joint Seminar 8:30 AM PT)
Displacement Effects in a Hot Spot Policing Intervention in Medellin: Inference and Pitfalls
Abstract: In hot spot policing, resources are targeted at specific locations predicted to be at high risk of crime, so-called 'hot spots.' Rather than reduce overall crime, however, there is a concern that these interventions simply displace crime from the targeted locations to nearby non-hot spots. We address this question in the context of a large-scale randomized experiment in Medellin, Colombia, in which police were randomly assigned to increase patrols at a subset of possible hot spots. Estimating the displacement effects on control locations is difficult because the probability that a nearby hot spot is treated is a complex function of the underlying geography. While existing methods developed for this 'general interference' setting, especially Horvitz-Thompson (HT) estimators, have attractive theoretical properties, they can perform poorly in practice and mislead practitioners. In this talk, I explore the key pitfalls that practitioners should watch out for when conducting this type of analysis, and propose some ways to partially remedy them.
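
The Horvitz-Thompson construction at issue weights each location's outcome by the inverse probability of its "exposure" (for example, having a treated hot spot nearby), with those probabilities computed by simulating the randomized design. A toy Python sketch of that step; when some exposure probabilities are tiny, the weights explode, which is one source of the poor practical performance discussed in the talk:

    import numpy as np

    rng = np.random.default_rng(0)
    n_spots, n_treated, n_sims = 10, 3, 10_000

    def draw_design():
        z = np.zeros(n_spots)
        z[rng.choice(n_spots, n_treated, replace=False)] = 1.0
        return z

    # Toy geography: a control location is 'exposed' to displacement if any
    # adjacent hot spot is treated.
    A = np.triu(rng.random((n_spots, n_spots)) < 0.3, 1)
    A = (A | A.T).astype(float)
    exposed = lambda z: ((A @ z) > 0) & (z == 0)

    # Monte Carlo exposure probabilities under the actual design.
    p_exp = np.mean([exposed(draw_design()) for _ in range(n_sims)], axis=0)

    # Horvitz-Thompson weighting of outcomes Y among exposed locations.
    z, Y = draw_design(), rng.normal(size=n_spots)
    ht = np.sum(np.where(exposed(z), Y / np.maximum(p_exp, 1e-12), 0.0)) / n_spots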

Bio: Guillaume Basse is an Assistant Professor in the MS&E and Statistics departments at Stanford. His research focuses broadly on Causal Inference and Design of Experiments in complex settings. He is particularly interested in complications arising when interventions spill over across space and/or time.
Dec. 2, 2020
José Correa
The Value of Observability in Dynamic Pricing
Abstract: We consider a dynamic pricing problem where a firm sells one item to a single buyer in order to maximize expected revenues. The firm commits to a price function over an infinite horizon. The buyer arrives at some random time with a private value for the item. He is more impatient than the seller and strategizes the time of his purchase in order to maximize his expected utility, which implies either buying immediately or waiting to benefit from a lower price. We study how important it is to observe the buyer's arrival time in terms of the seller's expected revenue. When the seller can observe the arrival of the buyer, she can make the price function contingent on his arrival time. On the contrary, when the seller cannot observe the arrival, her price function is fixed at time zero for the whole horizon. The value of observability (VO) is defined as the worst case ratio between the expected revenue of the seller when she observes the buyer's arrival and that when she does not. Our main result establishes that in a very general setting of valuation and arrival time distributions, the value of observability is bounded by a small constant. To obtain this bound we fully characterize the observable arrival setting and use this solution to construct a random and periodic price function for the unobservable case. This is joint work with Dana Pizarro and Gustavo Vulcano.

Bio: José Correa is a full professor in the Department of Industrial Engineering at Universidad de Chile. José obtained a mathematical engineering degree from Universidad de Chile in 1999 and a PhD in Operations Research from MIT in 2004. His research, focusing on algorithmic game theory and mechanism design, has received numerous awards, including an ACM SIGecom best paper award, an INFORMS Transportation Science and Logistics best paper award, a Tucker Prize finalist selection, and research awards from Amazon and Google. José has given keynote talks at several institutions and conferences and has served on the program committees of international computer science conferences. He also serves, and has served, on the editorial boards of some of the leading journals of his field: Mathematical Programming B, Mathematics of Operations Research (as Game Theory Area Editor), and Operations Research.
Feb. 17, 2021
Jean Pouget-Abadie
Design and Analysis of Bipartite (Market) Experiments
Abstract: Bipartite experiments are a recent object of study in causal inference, whereby treatment is applied to one set of units while outcomes of interest are measured on a different set of units; the interactions between the treated units and the 'outcome' units can be captured by a bipartite graph. These experiments are particularly useful in settings where strong interference effects occur between units of a bipartite graph. In market experiments, for example, assigning treatment at the seller-level and measuring outcomes at the buyer-level (or vice-versa) may lead to causal models that better account for the interference that naturally occurs between buyers and sellers. In this talk, we will cover the motivation for and formal setting of bipartite experiments. Furthermore, we will explore possible design choices for such experiments, namely clustered randomized designs, as well as various unbiased inference methods.

Bio: Jean is a research scientist at Google NY, on the Algorithms and Optimization team, led by Vahab Mirrokni. Before Google, Jean was a PhD student in computer science at Harvard University, advised by Edoardo Airoldi and Salil Vadhan. Prior to that, he was an undergraduate at Ecole Polytechnique in Paris. His recent research interests focus on causal inference, particularly when interactions between units are present.
Feb. 24, 2021
Annie Liang
Data and Incentives
Abstract: Markets for lending and insurance incentivize good behavior by forecasting risk on the basis of past outcomes. As "big data" expands the set of covariates used to predict risk, how will these incentives change? We show that "attribute" data which is informative about consumer quality tends to decrease effort, while "circumstance" data which predicts idiosyncratic shocks to outcomes tends to increase it. When covariates are independent, this effect is uniform across all consumers. Under more general forms of correlation, this effect continues to hold on average, but observation of a new covariate may lead to disparate impact, increasing effort for some consumer groups and decreasing it for others. A regulator can improve social welfare by restricting the use of either attribute or circumstance data, and by limiting the use of covariates with substantial disparate impact.

Bio: Annie Liang is an Assistant Professor of Economics and an Assistant Professor of Computer Science at Northwestern University. Her work focuses on economic theory and the application of machine learning techniques in the social sciences. She has studied the dynamics of strategic information acquisition, as well as the use of machine learning to evaluate and improve economic models.
Mar. 3, 2021
Arun Chandrasekhar
Identifying the latent space geometry of network models through analysis of curvature
Abstract: Statistically modeling networks, across numerous disciplines and contexts, is fundamentally challenging because of (often high-order) dependence between connections. A common approach assigns each person in the graph to a position on a low-dimensional manifold. Distance between individuals in this (latent) space is inversely proportional to the likelihood of forming a connection. The choice of the latent geometry (the manifold class, dimension, and curvature) has consequential impacts on the substantive conclusions of the model. More positive curvature in the manifold, for example, encourages more and tighter communities; negative curvature induces repulsion among nodes. Currently, however, the choice of the latent geometry is an a priori modeling assumption and there is limited guidance about how to make these choices in a data-driven way. In this work, we present a method to consistently estimate the manifold type, dimension, and curvature from an empirically relevant class of latent spaces: simply connected, complete Riemannian manifolds of constant curvature. Our core insight comes by representing the graph as a noisy distance matrix based on the ties between cliques. Leveraging results from statistical geometry, we develop hypothesis tests to determine whether the observed distances could plausibly be embedded isometrically in each of the candidate geometries. We explore the accuracy of our approach with simulations and then apply our approach to datasets from economics and sociology as well as neuroscience. Paper: https://stanford.edu/~arungc/LCM.pdf

Bio: Arun Chandrasekhar is an Associate Professor of Economics at Stanford University. His work focuses on development economics. He studies the role that social networks play in developing countries. In particular, he is interested in how the economics of networks can help us understand information aggregation failures and breakdown of cooperation in the developing world.
Apr. 7, 2021
Michael Leung
Network Cluster-Robust Inference
Abstract: Since network data commonly consists of observations on a single large network, researchers often partition the network into clusters in order to apply cluster-robust inference methods. All existing such methods require clusters to be asymptotically independent. We prove that for this requirement to hold, under certain conditions, it is necessary and sufficient for clusters to have low conductance, the ratio of edge boundary size to volume, which yields a measure of cluster quality. We show in simulations that, for important classes of networks lacking low-conductance clusters, cluster-robust methods can exhibit substantial size distortion, whereas for networks with such clusters, they outperform HAC estimators. To assess the existence of low-conductance clusters and construct them, we draw on results in spectral graph theory showing a close connection between conductance and the spectrum of the graph Laplacian. Based on these results, we propose to use the spectrum to compute the number of low-conductance clusters and spectral clustering to compute the clusters. We illustrate our results and proposed methods in simulations and empirical applications.
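
For reference, the conductance of a cluster S is the number of edges leaving S divided by the smaller of the two sides' volumes (sums of degrees); low conductance means the cluster is only weakly tied to the rest of the network. A direct computation:

    import numpy as np

    def conductance(A, S):
        # A: symmetric 0/1 adjacency matrix; S: boolean mask for the cluster.
        cut = A[S][:, ~S].sum()                  # edges crossing the boundary
        return cut / min(A[S].sum(), A[~S].sum())

    # Two triangles joined by a single edge: conductance 1/7.
    A = np.zeros((6, 6))
    A[:3, :3] = 1.0
    A[3:, 3:] = 1.0
    np.fill_diagonal(A, 0.0)
    A[2, 3] = A[3, 2] = 1.0
    print(conductance(A, np.arange(6) < 3))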

Bio: Michael Leung is an economist at USC whose work focuses on developing econometric methods for network data.
Apr. 14, 2021
Paul Milgrom
Investment Incentives in Near-Optimal Mechanisms
Abstract: In many real-world resource allocation problems, optimization is computationally intractable, so any practical allocation mechanism must be based on an approximation algorithm. We study investment incentives in strategy-proof mechanisms that use such approximations. In sharp contrast with the Vickrey-Clarke-Groves mechanism, for which individual returns on investments are aligned with social welfare, we find that some algorithms that approximate efficient allocation arbitrarily well can nevertheless create misaligned investment incentives that lead to arbitrarily bad overall outcomes. However, if a near-efficient algorithm "excludes bossy negative externalities," then its outcomes remain near-efficient even after accounting for investments. A weakening of this "XBONE" condition is necessary and sufficient for the result.

Bio: Paul Milgrom is the Shirley and Leonard Ely Professor of Humanities and Sciences in the Department of Economics at Stanford University and a professor, by courtesy, at both the Department of Management Science and Engineering and the Graduate School of Business. He is the world's leading auction designer, known for his work on auction theory and innovative resource allocation methods, particularly in radio spectrum. He is the co-recipient of the 2020 Nobel Prize in Economic Sciences, together with Robert Wilson, 'for improvements to auction theory and inventions of new auction formats.'
Apr. 21, 2021
Shipra Agrawal
Dynamic Pricing and Learning under the Bass Model
Abstract: We consider a novel formulation of the dynamic pricing and demand learning problem, where the evolution of demand in response to posted prices is governed by a stochastic variant of the popular Bass model with parameters (α, β) that are linked to the so-called "innovation" and "imitation" effects. Unlike the more commonly used i.i.d. demand models, in this model the price posted not only affects the demand and the revenue in the current round but also the evolution of demand, and hence the fraction of market potential that can be captured, in future rounds. Finding a revenue-maximizing dynamic pricing policy in this model is non-trivial even when model parameters are known, and requires solving for the optimal non-stationary policy of a continuous-time, continuous-state MDP. In this paper, we consider the problem where dynamic pricing is used in conjunction with learning the model parameters, with the objective of optimizing the cumulative revenues over a given selling horizon. Our main contribution is an algorithm with a regret guarantee of O(m^(2/3)), where m is mnemonic for the (known) market size. Moreover, we show that no algorithm can incur a smaller order of loss by deriving a matching lower bound. We observe that in this problem the market size m, and not the time horizon T, is the fundamental driver of the complexity; our lower bound in fact indicates that for any fixed α, β, most non-trivial instances of the problem have constant T and large m. This insight sets the problem setting considered here uniquely apart from the MAB type formulations typically considered in the learning to price literature.
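
As a refresher on the deterministic dynamics underlying the formulation: with innovation coefficient α and imitation coefficient β, the captured fraction F of the market potential evolves as dF/dt = (α + βF)(1 − F), so adoption comes both from spontaneous purchases and from imitation of earlier adopters. A discrete-time Python simulation (the talk's model adds stochasticity and price dependence):

    import numpy as np

    def bass_adoption(alpha, beta, T, dt=0.01):
        # Euler simulation of dF/dt = (alpha + beta * F) * (1 - F).
        F = np.zeros(int(T / dt))
        for t in range(1, len(F)):
            F[t] = F[t - 1] + dt * (alpha + beta * F[t - 1]) * (1 - F[t - 1])
        return F

    F = bass_adoption(alpha=0.03, beta=0.4, T=30)
    print(F[-1])   # fraction of the market potential captured by time T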

Bio: Shipra Agrawal is the Cyrus Derman Assistant Professor in the Department of Industrial Engineering and Operations Research at Columbia University, where she is also affiliated with the Department of Computer Science and the Data Science Institute. She received her Ph.D. from Stanford University in June 2011 under the guidance of Prof. Yinyu Ye and was a researcher at Microsoft Research India from 2011 to 2015. Her research spans several areas of optimization and machine learning, including online optimization under uncertainty, multi-armed bandits, online learning, and reinforcement learning. Shipra serves as an associate editor for Management Science, Mathematics of Operations Research, and INFORMS Journal on Optimization. Her research is supported by a Google Faculty Research Award, an Amazon Research Award, and an NSF CAREER Award.
Apr. 28, 2021
Lihua Lei
Hierarchical Community Detection for Heterogeneous and Multi-scaled Networks
Abstract: Real-world networks are often hierarchical, heterogeneous, and multi-scaled, while the idealized stochastic block models that are extensively studied in the literature tend to be over-simplified. In a line of work, we propose several top-down recursive partitioning algorithms which start with the entire network and divide the nodes into two communities by certain spectral clustering methods repeatedly, until a stopping rule indicates no further community structures. For these algorithms, the number of communities does not need to be known a priori or estimated consistently. On a broad class of hierarchical network models, our algorithms are proved to achieve the exact recovery for sparse networks with expected node degrees logarithmic in the network size, and are computationally more efficient than non-hierarchical spectral clustering algorithms. More interestingly, we identify regimes where no algorithm can recover all communities simultaneously while our algorithm can still recover the mega-communities (unions of communities defined by the hierarchy) consistently without recovering the finest structure. Our theoretical results are based on the newly developed two-to-infinity eigenspace perturbation theory for binary random matrices with independent or dependent entries.
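
The top-down scheme can be sketched in a few lines of numpy: split the current node set by the sign of the second eigenvector (the Fiedler vector) of the subgraph Laplacian, and recurse until a stopping rule fires. Here a naive size threshold stands in for the statistical stopping rule developed in the work:

    import numpy as np

    def recursive_bipartition(A, nodes=None, min_size=5):
        if nodes is None:
            nodes = np.arange(A.shape[0])
        if len(nodes) <= min_size:              # stand-in stopping rule
            return [nodes]
        sub = A[np.ix_(nodes, nodes)]
        L = np.diag(sub.sum(axis=1)) - sub      # combinatorial Laplacian
        _, vecs = np.linalg.eigh(L)
        side = vecs[:, 1] >= 0                  # sign split on the Fiedler vector
        if side.all() or not side.any():
            return [nodes]
        return (recursive_bipartition(A, nodes[side], min_size)
                + recursive_bipartition(A, nodes[~side], min_size))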

Bio: Lihua Lei is a postdoctoral researcher in Statistics at Stanford University, advised by Emmanuel Candès. Prior to joining Stanford, he obtained his Ph.D. in statistics at UC Berkeley, advised by Peter Bickel and Michael Jordan. His research areas include network analysis, causal inference, conformal inference, multiple hypothesis testing, and stochastic optimization.
May 5, 2021
Mukund Sundararajan
Using Attribution to Understand Deep Neural Networks
Abstract:
Predicting cancer from XRays seemed great
Until we discovered the true reason.
The model, in its glory, did fixate
On radiologist markings - treason!
We found the issue with attribution:
By blaming pixels for the prediction (1,2,3,4,5,6).
A complement'ry way to attribute,
is to pay training data, a tribute (1).
If you are int'rested in FTC,
counterfactual theory, SGD
Or Shapley values and fine kernel tricks,
Please come attend, unless you have conflicts
Should you build deep models down the road,
Use attributions. Takes ten lines of code!
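
For the curious: one pixel-attribution method of the kind the poem alludes to is integrated gradients, which the speaker co-developed. It attributes a prediction to input features by accumulating gradients along the straight path from a baseline to the input, and, true to the closing line, takes about ten lines (grad_fn is an assumed user-supplied gradient oracle):

    import numpy as np

    def integrated_gradients(x, baseline, grad_fn, steps=50):
        # Attribution_i = (x_i - baseline_i) times the average gradient along
        # the straight-line path from baseline to x (midpoint Riemann sum).
        alphas = (np.arange(steps) + 0.5) / steps
        grads = np.mean([grad_fn(baseline + a * (x - baseline)) for a in alphas],
                        axis=0)
        return (x - baseline) * grads

    # Toy model f(x) = sum(x**2), with gradient 2x and a zero baseline.
    x = np.array([1.0, -2.0, 3.0])
    print(integrated_gradients(x, np.zeros(3), lambda z: 2 * z))
    # Attributions sum to f(x) - f(baseline) = 14: the completeness axiom.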

Bio:
There once was an RS called MS,
He studied models that are a mess,
A director at Google.
Accurate and frugal,
Explanations are what he liked best.
May 26, 2021
Mark Braverman
Optimization-friendly generic mechanisms without money
Abstract: Our goal is to develop a generic framework for converting modern gradient-descent based optimization algorithms into mechanisms where inputs come from self-interested agents. We focus on aggregating preferences from n players in a context without money. Special cases of this setting include voting, allocation of items by lottery, and matching. Our key technical contribution is a new meta-algorithm we call APEX (Adaptive Pricing Equalizing Externalities). The framework is sufficiently general to be combined with any optimization algorithm that is based on local search. In the talk I'll outline the algorithm, and the open problems/research directions that it raises, with a particular focus on mechanism design + ML. If time permits, I will discuss a special case of applying the framework to the problem of one-sided allocation with lotteries. In this case, we obtain a strengthening of the 1979 result by Hylland and Zeckhauser on allocation via a competitive equilibrium from equal incomes (CEEI). The [HZ79] result posits that there is a (fractional) allocation and a set of item prices such that the allocation is a competitive equilibrium given those prices. We further show that there is always a reweighting of the players' utility values such that running the standard unit-demand VCG with reweighted utilities leads to HZ equilibrium prices. Interestingly, not all HZ competitive equilibria come from VCG prices.

Bio: Mark Braverman is a professor of Computer Science at Princeton University. He works primarily on building new connections between theoretical computer science and other disciplines, including information theory, algorithmic mechanism design, dynamical systems, analysis, and geometry. He received a 2013 Packard Fellowship, and a 2019 NSF Alan T. Waterman award.
June 2, 2021
Elena Manresa (12-1 pm PT)
An Adversarial Approach to Structural Estimation
Abstract: We propose a new simulation-based estimation method, adversarial estimation, for structural models. The estimator is formulated as the solution to a minimax problem between a generator (which generates synthetic observations using the structural model) and a discriminator (which classifies whether an observation is synthetic). The discriminator maximizes the accuracy of its classification while the generator minimizes it. We show that, with a sufficiently rich discriminator, the adversarial estimator attains parametric efficiency under correct specification and the parametric rate under misspecification. We advocate the use of a neural network as a discriminator that can exploit adaptivity properties and attain fast rates of convergence. We apply our method to a model of the elderly’s saving decisions and show that including gender and health profiles in the discriminator uncovers the bequest motive as an important source of saving across the wealth distribution, not only for the rich.
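
The minimax structure mirrors a GAN with the generator replaced by the structural model's simulator: a discriminator is trained to tell real observations from ones simulated at parameter θ, and the estimator selects the θ whose simulations the discriminator can least distinguish. A stripped-down Python sketch, with a logistic discriminator and a grid search standing in for the paper's neural network and optimizer:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    real = rng.normal(loc=1.5, size=(500, 1))        # data; unknown theta* = 1.5

    def discriminator_accuracy(theta):
        fake = rng.normal(loc=theta, size=(500, 1))  # structural-model simulator
        X = np.vstack([real, fake])
        y = np.r_[np.ones(500), np.zeros(500)]
        clf = LogisticRegression().fit(X, y)         # inner maximization
        return clf.score(X, y)

    # Outer minimization: pick the theta the discriminator distinguishes worst.
    grid = np.linspace(0.0, 3.0, 61)
    theta_hat = grid[np.argmin([discriminator_accuracy(t) for t in grid])]
    print(theta_hat)                                 # close to 1.5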

2019-2020

Oct. 2, 2019
Shane Henderson
Under the Hood of Bike Sharing
Oct. 16, 2019
Suleyman Kerimov (first half)
Scrip Systems with Minimal Availability
Oct. 16, 2019
Nikhil Garg (second half)
Driver Surge Pricing
Oct. 30, 2019
Rupert Freeman
Truthful Aggregation of Budget Proposals
Nov. 13, 2019
Aleksandra Korolova
Societal Concerns in Targeted Advertising
Nov. 20, 2019
Ali Aouad (OR Seminar)
Click-Based MNL: Algorithmic Frameworks for Modeling Click Data in Assortment Optimization
Dec. 4, 2019
Peng Shi
Optimal Priority-Based Allocation Mechanisms
Jan. 15, 2020
Matt Weinberg
(a biased selection of) Recent Developments in Combinatorial Auctions
Jan. 22, 2020
Sham Kakade (OR Seminar)
Representation, Modeling, and Optimization in Reinforcement Learning
Jan. 29, 2020
Warren Powell (OR Seminar)
From Reinforcement Learning to Stochastic Optimization: A Universal Framework for Sequential Decision Analytics
Feb. 5, 2020
Haris Aziz
Fair and Efficient Allocation of Indivisible Goods and Chores
Feb. 12, 2020
Avi Mandelbaum (OR Seminar)
"Theompirical" Research of Service Systems
Feb. 19, 2020
Edward McFowland III
Estimating Causal Peer Influence in Homophilous Social Networks by Inferring Latent Locations
Feb. 26, 2020
Benjamin Plaut (first half)
Beyond the First Welfare Theorem: Counteracting Inequality in Markets
Feb. 26, 2020
Faidra Monachou (second half)
Discrimination in Online Markets: Effects of Social Bias on Learning from Reviews and Policy Design
Mar. 11, 2020
Kent Quanrud (OR Seminar)
TBA
Apr. 8, 2020
Sandra Gonzalez-Bailon
TBA
May 20, 2020
Chris Musco (OR Seminar)
TBA

Current talks can be found here. Talks and abstracts from before 2020 can be found here.