Stanford RAIN (Research on Algorithms and Incentives in Networks) Seminar

The Internet is a complex network made up of both machines and people, so problems in this domain often require techniques from both algorithms and the social sciences. The RAIN (Research on Algorithms and Incentives in Networks) seminar, supported by SOAL, provides a gathering place for talks and social discussion in this area.

Upcoming Talks

Apr. 5, 2023
Christopher Harshaw
A New Design-Based Framework for Randomized Experiments and the Riesz Estimator
Abstract: Randomized experiments are widely used for investigating causal quantities in a variety of settings, from clinical trials and public policy evaluations to development economics and product development. A standard assumption in designing and analyzing such experiments is that of 'no-interference', which states that a participant’s outcome is only affected by their individual treatment and not by the treatments given to other participants in the experiment. Although standard, this assumption does not hold in many experimental settings, e.g. studies of the effect of vaccines where disease may spread between the participants or cash transfer programs where participants may affect local economies containing other participants. In this talk, I will present a new design-based experimental framework for formulating and investigating causal quantities under complex interference. The framework is expressive in the sense that it allows experimenters to define and investigate new causal estimands based on continuous or discrete treatment assignments under rich and complex interference structures. We present the Riesz Estimator, which is a unified approach to estimation based on insights from functional analysis. Finally, we derive the necessary statistical tools (CLT, variance estimators) to construct asymptotically valid confidence intervals for the causal estimand. Joint work with Fredrik Sävje and Yitan Wang.
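As a concrete (if far simpler) illustration of the design-based perspective the abstract builds on, the sketch below computes the classical Horvitz-Thompson estimate of the average treatment effect under a Bernoulli design with no interference, and checks design-unbiasedness by averaging over every possible assignment. The potential outcomes and assignment probability are hypothetical, and this is the textbook building block, not the Riesz Estimator from the talk.

```python
from itertools import product

# Toy potential outcomes for 4 units (hypothetical numbers).
y1 = [5.0, 3.0, 4.0, 6.0]   # outcome if treated
y0 = [2.0, 1.0, 3.0, 2.0]   # outcome if control
n, p = len(y1), 0.5          # Bernoulli(p) treatment assignment

true_ate = sum(a - b for a, b in zip(y1, y0)) / n

def horvitz_thompson(z):
    # Inverse-probability weighting: each unit contributes its observed
    # outcome divided by the design probability of its observed assignment.
    return sum(z[i] * y1[i] / p - (1 - z[i]) * y0[i] / (1 - p)
               for i in range(n)) / n

# Averaging the estimator over every assignment, weighted by its design
# probability, recovers the true ATE exactly: design-unbiasedness.
expected = sum(
    (p ** sum(z)) * ((1 - p) ** (n - sum(z))) * horvitz_thompson(z)
    for z in product([0, 1], repeat=n)
)
```

The design-based framework in the talk generalizes exactly this kind of reasoning, where randomness comes only from the assignment mechanism, to settings with interference.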

Bio: Christopher Harshaw is a FODSI postdoctoral fellow hosted jointly between UC Berkeley and MIT. He obtained his PhD in Computer Science from Yale University, where he was advised by Daniel Spielman and Amin Karbasi. His research addresses theoretical aspects of causal inference and data science, particularly algorithmic and statistical problems arising in the design and analysis of randomized experiments.
Apr. 12, 2023
Dhruv Madeka
Deep Reinforcement Learning for Real-World Inventory Management
Abstract: We present a Deep Reinforcement Learning approach to solving a periodic review inventory control system with stochastic vendor lead times, lost sales, correlated demand, and price matching. While this dynamic program has historically been considered intractable, we show that several policy learning approaches are competitive with or outperform classical baseline approaches. In order to train these algorithms, we develop novel techniques to convert historical data into a simulator and present a collection of results that motivate this approach. We also present a model-based reinforcement learning procedure (Direct Backprop) to solve the dynamic periodic review inventory control problem by constructing a differentiable simulator. Under a variety of metrics Direct Backprop outperforms model-free RL and newsvendor baselines, in both simulations and real-world deployments.
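The newsvendor baseline mentioned in the abstract can be sketched in a few lines: order the critical-fractile quantile of the demand distribution. The costs and demand samples below are hypothetical, and this is only the textbook single-period heuristic, not the paper's RL approach.

```python
import math

# Hypothetical per-unit costs.
underage = 3.0   # profit lost per unit of unmet demand
overage = 1.0    # cost per unsold unit
critical_ratio = underage / (underage + overage)   # 0.75

demand_samples = sorted([4, 7, 9, 12, 15, 18, 20, 25])

def empirical_quantile(samples, q):
    # Smallest sample whose empirical CDF is at least q.
    k = math.ceil(q * len(samples)) - 1
    return samples[max(k, 0)]

# Classic newsvendor rule: order the critical-ratio quantile of demand.
order_qty = empirical_quantile(demand_samples, critical_ratio)
```

The RL methods in the talk aim to beat this kind of baseline precisely because the real problem adds lead times, lost sales, and correlated demand that the single-period rule ignores.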

Bio: Dhruv is a Principal Machine Learning Scientist at Amazon. His current research focuses on applying Deep Reinforcement Learning to supply chain problems. Dhruv has also worked on developing generative and supervised deep learning models for probabilistic time series forecasting. In the past, Dhruv worked on the Quantitative Research team at Bloomberg LP, contributing to the open source Python and Jupyter community.
Apr. 26, 2023
Kamesh Munagala
Online Learning and Bandits with Queried Hints
Abstract: We consider the classic online learning and stochastic multi-armed bandit (MAB) problems, when at each step, the online policy can probe and obtain advice about which of a small number (k) of choices has better reward (or loss) before making its choice. In this model, we derive algorithms whose regret bounds have exponentially better dependence on the time horizon compared to the classic regret bounds. In particular, we show that probing with k=2 suffices to achieve time-independent regret bounds for online linear and convex optimization. For stochastic MAB with independent arms, we show that the same number of probes improves the dependence of regret on the horizon length T from O(sqrt T) to O(log T). For stochastic MAB, we also consider a stronger model where a probe reveals the reward values of the probed arms, and show that in this case, k=3 probes suffice to achieve parameter-independent constant regret. Such regret bounds cannot be achieved even with full feedback after the play, showcasing the power of limited 'advice' via probing before making the play. We also present extensions to the setting where such advice can be imperfect. Joint work with Aditya Bhaskara (Utah), Sreenivas Gollapudi (Google), Sungjin Im (UC Merced), and Kostas Kollias (Google).
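A toy calculation of why even k=2 probes help: if advice reveals which of two uniformly sampled arms is better, the policy collects the maximum of the pair instead of a single random arm's reward. The arm means below are hypothetical, and this illustrates only the per-step advantage, not the regret analysis from the talk.

```python
from itertools import combinations

# Mean rewards of the arms (hypothetical values).
mu = [0.2, 0.5, 0.6, 0.9]

# Baseline: pick one arm uniformly at random.
single = sum(mu) / len(mu)

# With k=2 probes, advice reveals which of two sampled arms is better,
# so the policy collects the max of the pair.
pairs = list(combinations(mu, 2))
probed = sum(max(a, b) for a, b in pairs) / len(pairs)
```

Turning this pointwise advantage into exponentially better regret bounds is the nontrivial part the talk addresses.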

Bio: Kamesh Munagala is Professor of Computer Science at Duke University. He is broadly interested in algorithm design, particularly approximation and online algorithms, algorithmic fairness, and algorithmic game theory. He obtained his Ph.D. (2003) and M.S. (2002) from Stanford University, and B.Tech. (1998) from IIT Bombay. He is an ACM Distinguished Scientist (2019); a recipient of the NSF CAREER Award (2008) and the Alfred P. Sloan Fellowship (2009); and is a co-author on the best papers at the WINE 2018 and WWW 2009 conferences. He was a Visiting Research Professor at Twitter in 2012, served as the Director of Graduate Studies for Duke CS from 2012 to 2015, and currently serves as its Associate Chair.
May 10, 2023
Dave Holtz
Reducing Interference Bias in Online Marketplace Experiments using Cluster Randomization: Evidence from a Pricing Meta-Experiment on Airbnb
Abstract: Online marketplace designers frequently run randomized experiments to measure the impact of proposed product changes. However, given that marketplaces are inherently connected, total average treatment effect (TATE) estimates obtained through individual-level randomized experiments may be biased due to violations of the stable unit treatment value assumption, a phenomenon we refer to as 'interference bias.' Cluster randomization, i.e., the practice of randomizing treatment assignment at the level of 'clusters' of similar individuals, is an established experiment design technique for countering interference bias in social networks, but it is unclear ex ante if it will be effective in marketplace settings. In this paper, we use a meta-experiment or 'experiment over experiments' conducted on Airbnb to both provide empirical evidence of interference bias in online market settings and assess the viability of cluster randomization as a tool for reducing interference bias in marketplace TATE estimates. Results from our meta-experiment indicate that at least 19.76% of the TATE estimate produced by an individual-randomized evaluation of the platform fee increase we study is attributable to interference bias and eliminated through the use of cluster randomization. We also find suggestive, non-statistically significant evidence that interference bias in seller-side experiments is more severe in demand-constrained markets, and that the efficacy of cluster randomization at reducing interference bias increases with cluster quality.
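A minimal sketch of the cluster randomization design studied in the paper: treatment is drawn once per cluster, so units in the same cluster are never split across arms. The unit-to-cluster mapping below is hypothetical.

```python
import random

# Hypothetical listings mapped to clusters of similar/competing listings.
units = {"l1": "c1", "l2": "c1", "l3": "c2", "l4": "c2", "l5": "c3"}

rng = random.Random(0)  # seeded for reproducibility
clusters = sorted(set(units.values()))

# Randomize at the cluster level, then let each unit inherit its
# cluster's arm; interfering neighbors thus share an assignment.
cluster_arm = {c: rng.choice(["treatment", "control"]) for c in clusters}
assignment = {u: cluster_arm[c] for u, c in units.items()}
```

The meta-experiment in the talk compares exactly this design against individual-level randomization to quantify how much interference bias the clustering removes.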

Bio: David Holtz is an assistant professor in the Management of Organizations (MORS) and Entrepreneurship and Innovation groups at the Haas School of Business. He earned his PhD at the MIT Sloan School of Management, in the Information Technology (IT) group. He also holds an MA in Physics and Astronomy from Johns Hopkins University, and a BA in Physics from Princeton University. Holtz studies the design of online marketplaces and platforms using large-scale online field experiments and novel digital trace data. His research agenda focuses on online trust and reputation system design, the business and societal impacts of personalized recommendations, and the design and analysis of field experiments in online marketplaces. His work has appeared in a number of journals and conferences, including the Proceedings of the National Academy of Sciences and the ACM Conference on Economics and Computation, and has been covered by popular news outlets, such as MSNBC, The Washington Post, and the Boston Globe. Before returning to academia, Holtz spent time in the Bay Area working as a data scientist and product manager at a number of technology firms, including Airbnb (where he was one of the founding members of the company’s algorithmic pricing team) and TrialPay (acquired by Visa). In carrying out his research agenda, he continues to work closely with many leading firms in the tech sector, including Airbnb, Facebook, Spotify, Microsoft, and Etsy.
May 17, 2023
Yuri Faenza
TBD
Abstract: TBD
May 24, 2023
Suresh Venkatasubramanian
TBD
Abstract: TBD
Coming soon!

Previous Talks This Year

Oct. 12, 2022
Diyi Yang
More Civility and Positivity for Socially Responsible Language Understanding
Abstract: Natural language processing (NLP) has had increasing success and produced extensive industrial applications. Despite being sufficient to enable these applications, current NLP systems often ignore the social part of language, e.g., who says it, in what context, for what goals, which severely limits the functionality of these applications and the growth of the field. Our research focuses on the social part of language, towards building more socially responsible language technologies. In this talk, I will take a closer look at social factors in language and share two recent works for promoting more civility and positivity in language use. The first one studies hate speech by introducing a benchmark corpus on implicit hate speech and computational models to detect and explain latent hatred in language. The second examines positive reframing by neutralizing a negative point of view and generating a more positive perspective without contradicting the original meaning.

Bio: Diyi Yang is an assistant professor in the Computer Science Department at Stanford University. Her research interests are computational social science and natural language processing. Her research goal is to understand the social aspects of language and to build socially aware NLP systems to better support human-human and human-computer interaction. Her work has received multiple paper awards or nominations at ACL, ICWSM, EMNLP, SIGCHI, and CSCW. She is a recipient of Forbes 30 under 30 in Science (2020), IEEE “AI's 10 to Watch” (2020), the Intel Rising Star Faculty Award (2021), the Microsoft Research Faculty Fellowship (2021), and the NSF CAREER Award (2022).
Oct. 26, 2022
Jamie Morgenstern
Shifts in Distributions and Preferences in Response to Learning
Abstract: Prediction systems face exogenous and endogenous distribution shift -- the world constantly changes, and the predictions the system makes change the environment in which it operates. For example, a music recommender observes exogenous changes in the user distribution as different communities have increased access to high speed internet. If users under the age of 18 enjoy their recommendations, the proportion of the user base comprised of those under 18 may endogenously increase. Most of the study of endogenous shifts has focused on the single decision-maker setting, where there is one learner that users either choose to use or not. In this talk, I'll describe several settings where user preferences may cause changes in distributions over the life of an ML system, and how these changes will affect the long-term performance of such systems. Joint work with Sarah Dean, Mihaela Curmei, Maryam Fazel and Lillian Ratliff.

Bio: Jamie Morgenstern is an assistant professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She was previously an assistant professor in the School of Computer Science at Georgia Tech. Prior to starting as faculty, she was fortunate to be hosted by Michael Kearns, Aaron Roth, and Rakesh Vohra as a Warren Center fellow at the University of Pennsylvania. She completed her PhD working with Avrim Blum at Carnegie Mellon University. She studies the social impact of machine learning and the impact of social behavior on ML's guarantees. How should machine learning be made robust to the behavior of the people generating training or test data for it? How can we ensure that the models we design do not exacerbate inequalities already present in society?
Nov. 9, 2022
Shahar Dobzinski
On the Hardness of Dominant Strategy Mechanism Design
Abstract: We study the communication complexity of dominant strategy implementations of combinatorial auctions. We start with two domains that are generally considered easy: multi-unit auctions with decreasing marginal values and combinatorial auctions with gross substitutes valuations. For both domains we have fast algorithms that find the welfare-maximizing allocation with communication complexity that is poly-logarithmic in the input size. This immediately implies that welfare maximization can be achieved in an ex-post equilibrium with no significant communication cost, by using VCG payments. In contrast, we show that in both domains the communication complexity of any dominant strategy implementation that achieves the optimal welfare is polynomial in the input size. We then study the approximation ratios achievable by dominant strategy mechanisms. For combinatorial auctions with general valuations, we show that no dominant-strategy mechanism achieves an approximation ratio of m^(1−eps), where m is the number of items. In contrast, we give a randomized dominant strategy mechanism that achieves an O(sqrt m) approximation. This proves the first gap between computationally efficient deterministic dominant strategy mechanisms and randomized ones. Joint work with Shiri Ron and Jan Vondrak.
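For readers unfamiliar with the VCG payments referenced in the abstract: in the single-item case VCG reduces to a second-price auction, where the highest bidder wins and pays the externality they impose on the others, i.e. the second-highest bid. A minimal sketch (the bid values are hypothetical):

```python
# Single-item VCG: allocate to the highest bidder and charge the
# welfare the other bidders lose because of the winner's presence,
# which here is simply the second-highest bid.
def single_item_vcg(bids):
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    winner = order[0]
    payment = bids[order[1]] if len(bids) > 1 else 0.0
    return winner, payment

winner, payment = single_item_vcg([3.0, 7.0, 5.0])
```

The talk's point is that although VCG makes truthful bidding an ex-post equilibrium essentially for free once the allocation is computed, insisting on full dominant strategy implementations can be dramatically more expensive in communication.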

Bio: Shahar Dobzinski is a faculty member of Weizmann's applied math and computer science department. His general research area is algorithmic game theory, an area at the intersection of computer science, game theory, and economics. Specifically, he usually studies theoretical computer science aspects of problems in algorithmic mechanism design. He is also interested in other related areas of theoretical computer science, such as approximation algorithms.
Nov. 16, 2022
Nynke Niezink
Networks in 3D: Inference for Three-way Social Networks
Abstract: Most statistical methods for social network analysis are developed for social structures that consist of directed or undirected ties between two actors. Yet, in many social contexts, relations inherently involve three actors. For example, different people are known to perceive and cognitively represent the networks they are embedded in differently. Understanding differences in perceptions, and how perceptions drive behavior, requires an explicit model of network perception (sender-receiver-perceiver) data. Gossip networks too require a three-way network perspective. In this talk, I will introduce statistical models and inference for static and dynamic three-way network analysis based on incomplete observations.

Bio: Nynke Niezink is an assistant professor in the Department of Statistics and Data Science at Carnegie Mellon University. Her research focuses on developing statistical methodology and software for the social sciences, with an emphasis on social network analysis. Much of her work is inspired by interdisciplinary collaborations. She received her PhD in Sociology, her MSc’s in Applied Mathematics and Social and Behavioral Sciences, and her BSc’s in Mathematics and Pedagogical and Educational Sciences from the University of Groningen. Her work has been supported by the NSF, NIH, and the Russell Sage and Richard King Mellon Foundations. She was granted a Provost Inclusive Teaching Fellowship for her project targeting diversity, equity, and inclusion in Statistics education at CMU.
Dec. 7, 2022
Thodoris Lykouris
Decentralized multi-agent learning in queuing systems
Abstract: Learning in multi-agent systems often poses significant challenges due to interference between agents. In particular, unlike classical stochastic systems, the performance of an agent's action is not drawn i.i.d. from some distribution but is directly affected by the (unobserved) actions of the other agents. This is the reason why most collaborative multi-agent learning approaches aim to globally coordinate all agents' actions to evade this interference. In this talk, we focus on bipartite queuing networks, a common model for two-sided platforms, where N agents request service from K servers. Prior decentralized multi-agent learning approaches have the aforementioned 'global coordination' flavor and therefore suffer from significant shortcomings: they are restricted to symmetric systems, have performance that degrades exponentially in the number of servers, require communication through shared randomness and unique identifiers, and are computationally demanding. In contrast, we provide a simple learning algorithm that, when run decentrally by each agent, avoids the shortcomings of 'global coordination' and leads to efficient performance in general asymmetric bipartite queuing networks while also having additional robustness properties. Along the way, we provide the first UCB-based algorithm for the centralized case of the problem, which resolves an open question by Krishnasamy, Sen, Johari, and Shakkottai (NeurIPS'16 / OR'21). The paper on which this talk is based is joint work with Daniel Freund and Wentao Weng and can be found here: https://arxiv.org/abs/2206.03324. A preliminary version appeared at COLT'22 and Wentao was selected as a finalist in the Applied Probability Society student paper competition for this work.

Bio: Thodoris Lykouris is an assistant professor at the MIT Sloan School of Management, affiliated with the Operations Management group and the Operations Research Center. His research focuses on data-driven sequential decision-making and spans across the areas of machine learning, dynamic optimization, and economics. Before joining MIT, he received his Ph.D. from Cornell University and spent two years at Microsoft Research NYC as a postdoctoral researcher. He is the recipient of a Google Ph.D. Fellowship and a Cornell University Fellowship and was selected as a finalist in the Dantzig Dissertation award as well as the George Nicholson and Applied Probability Society student paper competitions.
Jan. 11, 2023
Tina Eliassi-Rad
The Why, How, and When of Representations for Complex Systems, and their Implications for Machine Learning
Abstract: The theme of the 2021 Nobel Prize in Physics was the study of complex systems. At the most basic level, complex systems consist of units and their interactions. In this talk, I will describe each step of a data analysis pipeline suitable for the study of complex systems: from the system dependencies that can manifest themselves in different flavors (temporal, subset, and spatial) to the common mathematical representations (such as graphs, simplicial complexes, and hypergraphs), their underlying assumptions, and the dependencies they encode. I will discuss the mathematical relationships between representations and explain how information can be lost (or imputed) when we convert data from one representation to another. I will use examples to highlight the importance of dependencies and careful choice of representations and algorithms when studying complex systems. The main message of the talk is that there is no perfect way to analyze a complex system, and that modeling decisions made when examining a data set from one system are not necessarily transferable to another system, or even to another data set from the same system. Yet, I see many studies apply certain pipelines for seemingly no other reason than because they are common in a particular field. Instead, I recommend evaluating and studying each new complex system and dataset individually and questioning each assumption and modeling decision. This talk is based on the following paper: Leo Torres, Ann Sizemore Blevins, Danielle S. Bassett, Tina Eliassi-Rad: The Why, How, and When of Representations for Complex Systems. SIAM Review 63(3): 435-485 (2021), https://doi.org/10.1137/20M1355896. Time permitting, I will describe work on measuring the distance between two graphs by relying on the theory of the length spectrum function from algebraic topology and its relationship to the non-backtracking cycles of a graph. This work is joint with Leo Torres and Pablo Suarez-Serrato, and was published in the Journal of Applied Network Science in June 2019 (http://eliassi.org/papers/appliednetsci19_nbd.pdf).

Bio: Tina Eliassi-Rad is a professor of computer science at Northeastern University. She is also a core faculty member at Northeastern's Network Science Institute and the Institute for Experiential AI. In addition, she is an external faculty member at the Santa Fe Institute and the Vermont Complex Systems Center. Prior to joining Northeastern, Tina was an Associate Professor of Computer Science at Rutgers University; and before that she was a member of technical staff and principal investigator at Lawrence Livermore National Laboratory. Tina earned her Ph.D. in Computer Sciences (with a minor in Mathematical Statistics) at the University of Wisconsin-Madison. Her research is at the intersection of data mining, machine learning, and network science. She has over 100 peer-reviewed publications (including a few best paper and best paper runner-up awards); and has given over 250 invited talks and 14 tutorials. Tina's work has been applied to personalized search on the World-Wide Web, statistical indices of large-scale scientific simulation data, fraud detection, mobile ad targeting, cyber situational awareness, drug discovery, democracy and online discourse, and ethics in machine learning. Her algorithms have been incorporated into systems used by governments and industry (e.g., IBM System G Graph Analytics), as well as open-source software (e.g., Stanford Network Analysis Project). In 2017, Tina served as the program co-chair for the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (a.k.a. KDD, which is the premier conference on data mining) and as the program co-chair for the International Conference on Network Science (a.k.a. NetSci, which is the premier conference on network science). In 2020, she served as the program co-chair for the International Conference on Computational Social Science (a.k.a. IC2S2, which is the premier conference on computational social science). Tina received an Outstanding Mentor Award from the U.S. Department of Energy's Office of Science in 2010, became an ISI Foundation Fellow in 2019, was named one of the 100 Brilliant Women in AI Ethics in 2021, and received Northeastern University's Excellence in Research and Creative Activity Award in 2022.
Jan. 25, 2023
Sigal Oren
Algorithmic Game Theory Meets Behavioral Economics
Abstract: A recent line of work brings together Algorithmic Game Theory and Behavioral Economics. One approach taken in this line of work is to enrich well-studied settings in Algorithmic Game Theory by considering players that exhibit cognitive biases. In this talk, we will discuss two papers demonstrating this approach in two very different settings. The first paper extends the analysis of classic optimal stopping problems by considering agents that have loss aversion relative to a changing reference point and analyzes their behavior. The second paper relies on the literature on lying in behavioral economics and psychology to tackle one of the foundations of Algorithmic Mechanism Design: truthfulness. Based on joint works with Shahar Dobzinski, Bobby Kleinberg and Jon Kleinberg.

Bio: Sigal Oren is an associate professor in the Computer Science Department of Ben-Gurion University. She is currently on sabbatical at Stanford as a Koret fellow. Sigal's research area is algorithmic game theory, and her current interests include combining it with behavioral economics.
Feb. 8, 2023
Dan Larremore
Quantifying hierarchy and dynamics in U.S. faculty hiring and retention
Abstract: Faculty hiring and retention determine the composition of the U.S. academic workforce and directly shape educational outcomes, career trajectories, the development and spread of ideas, and research priorities. But patterns in faculty hiring and retention are dynamic, reflecting societal and academic priorities, generational turnover, and long-term efforts to diversify the professoriate along gender, racial, and socioeconomic lines. In this talk, we'll analyze, at unprecedented scale and resolution, the academic employment and doctoral education of tenure-track faculty at all PhD-granting U.S. universities over the decade spanning 2011-2020. Focusing on the networks formed when departments hire each other's graduates as faculty, we'll explore the mechanisms shaping these networks as well as the processes of the academic ecosystem that are shaped by them.

Bio: Daniel Larremore is an assistant professor in the Department of Computer Science and the BioFrontiers Institute at the University of Colorado Boulder. His research develops statistical and inferential methods for analyzing large-scale network data, and uses those methods to solve applied problems in diverse domains, including public health and academic labor markets. In particular, his work focuses on generative models for networks, the ongoing evolution of the malaria parasite, and the origins of social inequalities in academic hiring and careers. Prior to joining the CU Boulder faculty, he was an Omidyar Fellow at the Santa Fe Institute (2015-2017) and a post-doctoral fellow at the Harvard T.H. Chan School of Public Health (2012-2015). He obtained his PhD in applied mathematics from CU Boulder in 2012, and holds an undergraduate degree from Washington University in St. Louis.
Feb. 15, 2023
Rad Niazadeh
When Matching Meets Batching: Optimal Multi-stage Algorithms and Applications
Abstract: In several applications of real-time matching of demand to supply in online marketplaces --- for example matching delivery requests to dispatching centers in Amazon or allocating video-ads to users in YouTube --- the platform allows for some latency (or there is an inherent allowance for latency) in order to batch the demand and improve the efficiency of the resulting matching. Motivated by these scenarios, in this talk I investigate the optimal trade-off between batching and inefficiency in the context of designing robust online allocations. In particular, I consider K-stage variants of the classic vertex weighted bipartite b-matching and AdWords problems in the adversarial setting, where online vertices arrive stage-wise in K batches, in contrast to fully online arrival. Our main result for both problems is an optimal (1-(1-1/K)^K)-competitive (fractional) matching algorithm, improving the classic (1 − 1/e) competitive ratio bound known for the online variants of these problems (Mehta et al., 2007; Aggarwal et al., 2011). Our main technique, at a high level, is developing algorithmic tools to vary the trade-off between “greediness” and “hedging” of the matching algorithm across stages. We rely on a particular family of convex-programming based matchings that distribute the demand in a specifically balanced way among supply in different stages, while carefully modifying the balancedness of the resulting matching across stages. More precisely, we identify a sequence of polynomials with decreasing degrees to be used as strictly concave regularizers of the maximum weight matching linear program to form these convex programs. At each stage, our fractional multi-stage algorithm returns the corresponding regularized optimal solution as the matching of this stage (by solving the convex program). By providing a structural decomposition of the underlying graph using the optimal solutions of these convex programs and recursively connecting the regularizers together, we develop a new multi-stage primal-dual framework to analyze the competitive ratio of this algorithm. We extend our results to integral allocations in the vertex weighted b-matching problem with large budgets, and in the AdWords problem with small bid over budget ratios. I will also briefly mention a recent extension of these results to the multi-stage configuration allocation problem and its applications to video-ads. The talk is based on a series of works with Yiding Feng and Amin Saberi: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3689448 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3613755
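The K-stage guarantee 1-(1-1/K)^K quoted in the abstract can be tabulated directly: it equals 1 when everything arrives in a single batch (K=1, i.e. full offline information) and decreases toward the classic online bound 1-1/e ≈ 0.632 as the number of stages grows. A quick check:

```python
# Competitive ratio of the K-stage algorithm as a function of the
# number of batches K (the guarantee stated in the abstract).
def ratio(K):
    return 1 - (1 - 1 / K) ** K

# K=1 recovers the offline optimum; large K approaches 1 - 1/e.
values = {K: ratio(K) for K in (1, 2, 5, 100)}
```

This makes the trade-off concrete: every extra stage of batching costs some competitiveness, but any finite K strictly beats the fully online 1-1/e bound.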

Bio: Rad Niazadeh is an Assistant Professor of Operations Management at The University of Chicago Booth School of Business. He is also part of the faculty at the Toyota Technological Institute at Chicago (TTIC) by a courtesy appointment. Prior to joining Chicago Booth, he was a visiting researcher at Google Research NYC and a postdoctoral fellow in the Computer Science Department at Stanford University. He finished his PhD in Computer Science (with a minor in Applied Mathematics) at Cornell University. Rad primarily studies the interplay between algorithms, incentives, and learning in the real-time operations of online marketplaces and e-commerce platforms. His research aims to build theoretical methodologies and generic frameworks to design faster and economically efficient market algorithms, and to help address humanitarian needs (such as equity, fairness, and non-discrimination) in the operations of governmental agencies and non-profit organizations. https://faculty.chicagobooth.edu/rad-niazadeh
Mar. 15, 2023
Nika Haghtalab
Multi-objective learning: A unifying framework for robustness, fairness, and collaboration
Abstract: Social and real-world considerations such as robustness, fairness, social welfare, and multi-agent tradeoffs have given rise to multi-objective learning paradigms. In recent years, these paradigms have been studied by several disconnected communities and under different names, including collaborative learning, distributional robustness, group fairness, and fair federated learning. In this talk, I will highlight the importance of multi-objective learning paradigms in general, introduce technical tools for addressing them from a simple unifying perspective, and discuss how these problems relate to classical and modern considerations in data-driven processes.

Bio: Nika Haghtalab is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. She works broadly on the theoretical aspects of machine learning and algorithmic economics. Prof. Haghtalab's work builds theoretical foundations for ensuring both the performance of learning algorithms in the presence of everyday economic forces and the integrity of social and economic forces that are born out of the use of machine learning systems. Previously, Prof. Haghtalab was an Assistant Professor in the CS department of Cornell University, in 2019-2020. She received her Ph.D. from the Computer Science Department of Carnegie Mellon University. She is a co-founder of Learning Theory Alliance (LeT-All). Among her honors are the CMU School of Computer Science Dissertation Award, a SIGecom Dissertation Honorable Mention, and a NeurIPS outstanding paper award.

Previous talks can be found here.

About The Seminar

Seminar Organizers: Itai Ashlagi, Mohak Goyal.

Faculty Involved: Itai Ashlagi, Ashish Goel, Ramesh Johari, Amin Saberi, Aaron Sidford, Johan Ugander, Irene Lo, Ellen Vitercik.

Note for Speakers: The talk slot is 55 minutes including questions (we often start a couple of minutes late). If you are giving a talk at RAIN, please plan for a 45-50 minute talk, since the audience usually asks a lot of questions. Also, the audience is fairly knowledgeable, so speakers should not feel obligated to provide basic game-theoretic, algorithmic, societal, industrial, probabilistic, or statistical background.

Website template from the Stanford MLSys Seminar Series.