Stanford RAIN (Research on Algorithms and Incentives in Networks) Seminar
The Internet is a complex network made of both machines and people, and hence, problems in this domain often require techniques from both algorithms and the social sciences. The RAIN (Research on Algorithms and Incentives in Networks) seminar, supported by SOAL, provides a gathering place for talks and social discussion in this area.
- Our talks are on Mondays from 4-5 PM PT in Y2E2 299!
- To receive updates on upcoming talks, please join our email list!
Upcoming Talks
→ Abstract and Bio
We study a foundational model of a dynamic matching market with abandonment. This model has been studied by Collina et al. (2020) and Aouad and Saritac (2022), and many other papers have considered special cases. We compare the performance of greedy policies -- which identify a set of "acceptable" matches up front, and perform these matches as soon as possible -- to that of an omniscient benchmark which knows the full arrival and departure sequence.
Bio: Nick Arnosti is an Assistant Professor in the Department of Industrial and Systems Engineering (ISyE) at the University of Minnesota. He earned his Ph.D. in Operations Research from Stanford University in 2016, advised by Ramesh Johari and Paul Milgrom. Arnosti’s research centers on market-design questions for allocating social goods—such as public goods and school seats—with a particular emphasis on the lotteries and waitlists used to distribute affordable-housing units.
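The greedy-versus-omniscient comparison can be illustrated with a toy simulation. This is our own sketch, not the paper's model or analysis: each agent is present over an interval and abandons at its end, every overlapping pair is treated as "acceptable", the greedy policy matches each arrival to any compatible waiting agent, and the omniscient benchmark computes a maximum matching over all overlapping pairs by brute force. All rates and instance sizes below are illustrative.

```python
import itertools
import random

random.seed(3)

def overlap(a, b):
    """Two agents can be matched iff their presence intervals overlap."""
    return max(a[0], b[0]) < min(a[1], b[1])

def greedy(iv):
    """Match each arriving agent immediately to any agent still waiting."""
    waiting, matched = [], 0
    for i in sorted(range(len(iv)), key=lambda k: iv[k][0]):
        waiting = [j for j in waiting if iv[j][1] > iv[i][0]]  # drop departed
        if waiting:
            waiting.pop(0)
            matched += 1
        else:
            waiting.append(i)
    return matched

def omniscient(iv):
    """Maximum matching over all overlapping pairs (brute force)."""
    edges = [e for e in itertools.combinations(range(len(iv)), 2)
             if overlap(iv[e[0]], iv[e[1]])]
    best = 0
    def extend(used, start, size):
        nonlocal best
        best = max(best, size)
        for m in range(start, len(edges)):
            i, j = edges[m]
            if i not in used and j not in used:
                extend(used | {i, j}, m + 1, size + 1)
    extend(frozenset(), 0, 0)
    return best

# Illustrative instance: unit-rate arrivals, exponential patience.
iv, t = [], 0.0
for _ in range(10):
    t += random.expovariate(1.0)
    iv.append((t, t + random.expovariate(0.7)))
print(greedy(iv), "<=", omniscient(iv))
```

On any instance, greedy can never exceed the benchmark, since the benchmark is by definition the maximum matching among exactly the pairs greedy is allowed to form.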
Previous Talks This Year
→ Abstract and Bio
We introduce a framework for comparing the privacy of different dynamic mechanisms. A mechanism designer employs a dynamic protocol to elicit agents’ private information. Protocols produce a set of contextual privacy violations—information learned about agents that may be superfluous given the context. A protocol is maximally contextually private if there is no protocol that produces a subset of the violations it produces, while still implementing the choice rule. We show that selecting a maximally contextually private protocol involves a deliberate decision about whose privacy is most important to protect, and these protocols delay questions to those they aim to protect. Taking the second-price auction rule as an instructive example, we derive two novel designs that are maximally contextually private: the ascending-join and overdescending-join protocols.
Bio: Zoë Hitzig is a Junior Fellow at the Harvard Society of Fellows (on leave) and a Research Scientist at OpenAI.
→ Abstract and Bio
Recommendation systems are now used in high-stakes settings, including to help find jobs, schools, and partners. Building public-interest recommender systems in such settings brings both individual-level (enabling exploration, diversity, data quality) and societal (fairness, capacity constraints, algorithmic monoculture) challenges. In this talk, I'll discuss our theoretical, empirical, and deployment work in tackling these challenges, including ongoing work on (a) applicant behavior and recommendations for the NYC HS match, (b) a platform to help discharge patients to long-term-care facilities, and (c) feed-ranking algorithms on Bluesky for research-paper recommendations.
Bio: Nikhil Garg is an assistant professor of Operations Research and Information Engineering (ORIE) at Cornell Tech. His work has received several awards, including NSF CAREER, the INFORMS George Dantzig Dissertation Award, the ACM SIGecom Dissertation Award (Honorable Mention), Forbes 30 Under 30 for Science, and the NSF Graduate Research Fellowship. Previously, he received an MS and PhD from Stanford in 2020, where he was advised by Prof. Ashish Goel and Prof. Ramesh Johari.
→ Abstract and Bio
The impact of artificial intelligence (AI) transcends the simple case of human-machine interactions and extends to human-human interactions in the presence of AI. Here, I explore such “hybrid systems” of humans and machines. Our large-sample experiments show how the careful yet simple programming of AI agents can enhance the performance of human groups, making people within such groups better able to cooperate, coordinate, innovate, and communicate, ultimately contributing to their superior performance. On the other hand, both simple and complex forms of AI (such as large language models) can also do the opposite, harming groups of people and our society as a whole. Our experiments show how AI agents can affect social processes and human performance in settings as diverse as people working together online or coordinating their movement on roadways. Our work, in short, does not involve the development of super-smart AI to replace human cognition, but rather “dumb AI” to supplement human interaction. These findings reveal what the disruptive introduction of AI into our lives means for the future of human social behavior. And they suggest ways to design AI — as a type of “social catalyst” — so as to make sure it supports a utopian rather than dystopian future.
Bio: Nicholas A. Christakis, MD, PhD, MPH, is a social scientist and physician at Yale University who conducts research in the fields of network science, biosocial science, and computational social science. His current work focuses on how human biology and health affect, and are affected by, social interactions and social networks. Dr. Christakis directs the Human Nature Lab and is the Director of the Yale Institute for Network Science.
→ Abstract and Bio
Sortition is based on the idea of choosing representatives for decision-making at random. The main properties that make sortition particularly appealing are fairness and proportional representation. When a population lies on a representation metric, we formally define proportional representation using a notion called the core. Thus, we ask: Can we design a selection algorithm that satisfies fairness and the core simultaneously? We answer this question affirmatively and present an efficient selection algorithm, called Fair Greedy Capture, that is fair and provides a constant-factor approximation to the optimal core.
Bio: Evi Micha is an assistant professor in the Thomas Lord Department of Computer Science at the University of Southern California. She did her postdoc at Harvard University and obtained her Ph.D. from the University of Toronto. Her research interests lie at the intersection of computer science and economics and span areas such as algorithmic fairness and computational social choice. One of her recent papers was selected as exemplary in the applied-modeling track of the ACM Conference on Economics and Computation. Her research has been awarded the 2024 Best Dissertation Award from the Canadian AI Association and was runner-up for the 2024 IFAAMAS Victor Lesser Distinguished Dissertation Award.
→ Abstract and Bio
The talk will explore why gathering meaningful data is the true challenge in building effective dating platforms. While the solution might seem obvious, she'll discuss why getting the right data is both more crucial and more difficult than it appears.
Bio: Cora co-founded soon.dating, a coding project with her best friend that got popular in San Francisco last year. Before soon, she was a PhD student at UC Berkeley studying mathematical logic and philosophy. While building soon, she became entrenched in the data problem.
→ Abstract and Bio
Community Notes empowers people on X to collaboratively add context to potentially misleading posts. Community Notes have been rated as more trustworthy (bipartisanly) than simple misinformation flags, and cause users to organically like and share noted posts significantly less often. This talk will explore the algorithm behind X's Community Notes, including bridging-based ranking (elevating consensus from people who often disagree via matrix factorization) and a reputation system. We will also touch on newer ideas that aren't yet in production, e.g., generating notes that are predicted to be rated helpful by raters with a diverse set of viewpoints.
Bio: Jay Baxter is a Sr. Staff Machine Learning Engineer at X and was the founding ML lead on Community Notes. He was previously a tech lead in Twitter Cortex Applied Research, focusing on user modeling and large-scale, real-time recommender systems. He holds an MEng and SB in EECS from MIT, where he built BayesDB, a probabilistic database.
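The bridging-based ranking idea can be sketched with a tiny regularized matrix factorization. This is a simplified stand-in, not the production Community Notes algorithm: each rating is modeled as a global mean plus user and note intercepts plus a dot product of latent viewpoint factors, and a note is "bridging" when its intercept stays high after the factors explain away viewpoint-driven agreement. The synthetic data and all hyperparameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic helpfulness ratings (1 = helpful, 0 = not helpful).
# Users 0-3 lean one way, users 4-7 the other; note 0 is rated helpful
# across the divide, while notes 1 and 2 appeal to only one side each.
ratings = []
for u in range(8):
    side = 1 if u < 4 else -1
    ratings.append((u, 0, 1))                     # bridging note
    ratings.append((u, 1, 1 if side > 0 else 0))  # one-sided note
    ratings.append((u, 2, 1 if side < 0 else 0))  # one-sided note

n_users, n_notes, dim = 8, 3, 1
mu = 0.0
bu, bn = np.zeros(n_users), np.zeros(n_notes)
fu = rng.normal(0, 0.1, (n_users, dim))
fn = rng.normal(0, 0.1, (n_notes, dim))

# Stochastic gradient descent on the regularized squared error of
# rating ~ mu + bu[u] + bn[n] + fu[u] . fn[n]
lr, reg = 0.05, 0.03
for _ in range(2000):
    for u, n, r in ratings:
        err = r - (mu + bu[u] + bn[n] + fu[u] @ fn[n])
        mu += lr * err
        bu[u] += lr * (err - reg * bu[u])
        bn[n] += lr * (err - reg * bn[n])
        fu[u], fn[n] = (fu[u] + lr * (err * fn[n] - reg * fu[u]),
                        fn[n] + lr * (err * fu[u] - reg * fn[n]))

# A note's intercept bn measures helpfulness after the latent factors
# explain away viewpoint-driven agreement; the bridging note scores highest.
print(int(np.argmax(bn)))
```

Ranking by the note intercept rather than by raw average rating is what makes the scheme "bridging": a note liked only by one cluster of raters has its apparent support absorbed by the factor term instead.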
→ Abstract and Bio
Although current large language models are complex, the most basic specifications of the underlying language generation problem itself are simple to state: given a finite set of training samples from an unknown language, produce valid new strings from the language that don't already appear in the training data. Here we ask what we can conclude about language generation using only this specification, without further properties or distributional assumptions. In particular, we consider models in which an adversary enumerates the strings of an unknown target language that is known only to come from one of a possibly infinite list of candidates, and the goal is to generate new strings from this target language; we show that it is possible to give certain non-trivial guarantees for language generation in this setting. The resulting guarantees contrast dramatically with negative results due to Gold and Angluin in a well-studied model of language learning where the goal is to identify an unknown language from samples; the difference between these results suggests that identifying a language is a fundamentally different problem than generating from it. (This is joint work with Sendhil Mullainathan.)
Bio: Jon Kleinberg is the Tisch University Professor in the Departments of Computer Science and Information Science at Cornell University. His research focuses on the interaction of algorithms and networks, the roles they play in large-scale social and information systems, and their broader societal implications. He is a member of the National Academy of Sciences, the National Academy of Engineering, the American Academy of Arts and Sciences, and the American Philosophical Society, and he serves on the U.S. National AI Advisory Committee.
He has received MacArthur, Packard, Simons, Sloan, and Vannevar Bush research fellowships, as well as awards including the Harvey Prize, the Lanchester Prize, the Nevanlinna Prize, the World Laureates Association Prize, and the ACM Prize in Computing.
→ Abstract and Bio
Bio: Vinay Rao leads Anthropic's Trust and Safety team, which develops and enforces policies governing Claude's real-world applications. His team focuses on mitigating catastrophic risks as well as critical risks such as election interference, child safety, and misinformation. They also play a key role in securing Anthropic's models to comply with the company's Responsible Scaling Policy for safe and ethical AI development and deployment. With nearly two decades of experience in trust and safety, Vinay has built and managed leading Trust and Safety functions at major tech companies, including YouTube, Google, Airbnb, and Stripe.
→ Abstract and Bio
Computing equilibria in games is a problem of great interest in both economics and computer science. We present a min-max formulation of this problem, in which the minimizer seeks an equilibrium solution for the game, while the maximizer seeks to find fault with the proposed solutions. We call this the "forward" problem. In the "backwards" problem, we are instead given an equilibrium and a parameterized game, and we are interested in inferring a game from that equilibrium—that is, identifying the game parameters that induce the observed equilibrium. We consider the backwards problem under increasingly weaker assumptions, ranging from inverse multi-agent planning to inverse multi-agent learning and beyond, each with corresponding min-max formulations. While it can be difficult to ensure that the min-max formulation of a forward problem is convex-concave (and thus amenable to solutions such as gradient-descent ascent), we find it easier to ensure that the min-max formulation of a backward problem is convex-concave. We apply our backwards method to Spanish electricity-market time-series data and then push the inferred game forward to predict future market prices. (Joint work with Denizalp Goktas and Sadie Zhao.)
Bio: Amy Greenwald is a Professor of Computer Science at Brown University in Providence, Rhode Island. Her research focuses on game-theoretic and economic interactions among computational agents, with applications to automated bidding and negotiation in domains ranging from advertising auctions to supply chains. She is also active in promoting diversity in computer science, leading multiple K-12 initiatives in the Providence public schools.
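As a toy illustration of why convex-concave structure matters, simultaneous gradient descent-ascent converges on a simple saddle-point objective. The objective below is our own example, not the paper's formulation:

```python
# Toy convex-concave saddle-point problem:
#   f(x, y) = (x - 1)**2 + 2*x*y - y**2
# f is strongly convex in x and strongly concave in y, with unique
# saddle point (x*, y*) = (0.5, 0.5).
def grad_x(x, y):
    return 2 * (x - 1) + 2 * y   # d f / d x

def grad_y(x, y):
    return 2 * x - 2 * y         # d f / d y

x = y = 0.0
lr = 0.05
for _ in range(2000):
    # simultaneous updates: descend in x, ascend in y
    x, y = x - lr * grad_x(x, y), y + lr * grad_y(x, y)

print(round(x, 3), round(y, 3))  # → 0.5 0.5
```

Without strong convexity-concavity, plain simultaneous gradient descent-ascent can fail to converge (e.g., it cycles outward on the purely bilinear f(x, y) = x*y), which is why the convex-concave property emphasized in the abstract is valuable.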
→ Abstract and Bio
We propose a combinatorial ascending auction that is "approximately" optimal, requiring minimal rationality to achieve this level of optimality, and is robust to strategic and distributional uncertainties. Specifically, the auction is rank-guaranteed, meaning that for any menu M and any valuation profile, the ex-post revenue is guaranteed to be at least as high as the highest revenue achievable from feasible allocations, taking the (|M| + 1)th-highest valuation for each bundle as the price. Our analysis highlights a crucial aspect of combinatorial-auction design, namely, the design of menus. We provide simple and approximately optimal menus in various settings.
Bio: Weijie Zhong is an Assistant Professor of Economics at the Stanford Graduate School of Business. He specializes in microeconomic theory, focusing on how information influences individual behavior and market outcomes. His recent research delves into optimal ways to dynamically acquire information to aid decision-making.
→ Abstract and Bio
Resource pooling improves system efficiency drastically in large stochastic systems, but its effective implementation in decentralized systems remains relatively underexplored. This paper studies how to incentivize resource pooling when agents are self-interested and their states are private information. Our primary motivation is applications in the design of decentralized computing markets, among others. We study a standard multi-server queueing model in which each server is associated with an M/M/1 queue and aims to minimize its time-average job holding and processing costs. We design a simple token-based mechanism where servers can earn tokens by offering help and spend tokens to request help from other servers, all in their self-interest. The mechanism induces a complex game among servers. We employ the fluid mean-field equilibrium (FMFE) concept to analyze the system, combining mean-field approximation with fluid relaxation. This framework enables us to derive a closed-form characterization of servers' FMFE strategies. We show that these FMFE strategies approximate well the servers' rational behavior. We leverage this framework to optimize the design of the mechanism and present our main results: As the number of servers increases, the proposed mechanism incentivizes complete resource pooling—namely, the system dynamics and performance under our mechanism match those under centralized control.
Bio: Pengyu Qian is an Assistant Professor in the Operations & Technology Management department at the Questrom School of Business, Boston University. His research studies the design and analysis of marketplaces in dynamic settings, using tools from probability, optimization, and game theory. He is interested in foundational models driven by challenges in the sharing economy and the allocation of public resources.
His research emphasizes algorithms and mechanisms that not only have good theoretical guarantees but also are simple, robust, and practical for real-world systems. Pengyu is a recipient of the INFORMS JFIG Best Paper Prize. He earned his Ph.D. from Columbia Business School and his B.S. from Peking University.
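The building block of the model in the abstract above, a single M/M/1 queue, can be simulated in a few lines. This sketch covers only the queueing primitive, not the token mechanism or the FMFE analysis; the rates and horizon are illustrative. With arrival rate lam and service rate mu, the time-average number in system should approach rho / (1 - rho) for rho = lam / mu.

```python
import random

random.seed(1)

# Discrete-event simulation of a single M/M/1 queue: Poisson arrivals
# at rate lam, exponential services at rate mu.
lam, mu, horizon = 0.5, 1.0, 200_000.0
t, n, area = 0.0, 0, 0.0
next_arrival = random.expovariate(lam)
next_departure = float("inf")

while t < horizon:
    t_next = min(next_arrival, next_departure)
    area += n * (t_next - t)   # time-weighted number in system
    t = t_next
    if t == next_arrival:
        n += 1
        next_arrival = t + random.expovariate(lam)
        if n == 1:             # server was idle: start a service
            next_departure = t + random.expovariate(mu)
    else:
        n -= 1
        next_departure = t + random.expovariate(mu) if n else float("inf")

rho = lam / mu
print(round(area / t, 2), "vs", rho / (1 - rho))  # simulated vs. exact mean
```

With rho = 0.5 the exact stationary mean is 1.0, and over a long horizon the simulated time average should land close to it.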
→ Abstract and Bio
Large language models (LLMs) have demonstrated remarkable capabilities in generating coherent text and completing various natural-language tasks. Nevertheless, their ability to perform complex, general reasoning has remained limited. In this talk, I will describe OpenAI's new o1 model, an LLM trained via reinforcement learning to generate a hidden chain of thought before its response. We have found that the performance of o1 consistently improves with more reinforcement-learning compute and with more inference compute. o1 surpasses previous state-of-the-art models on a variety of benchmarks that require reasoning, including mathematics competitions, programming contests, and advanced science question sets. I will discuss the implications of scaling this paradigm even further.
Bio: Noam Brown is a research scientist at OpenAI investigating reasoning and multi-agent AI. He co-created Libratus and Pluribus, the first AIs to defeat top humans in two-player no-limit poker and multiplayer no-limit poker, respectively, and Cicero, the first AI to achieve human-level performance in the natural-language strategy game Diplomacy. He has received the Marvin Minsky Medal for Outstanding Achievements in AI, was named one of MIT Technology Review's 35 Innovators Under 35, and his work on Pluribus was named by *Science* as one of the top 10 scientific breakthroughs of 2019. Noam received his Ph.D. from Carnegie Mellon University.
→ Abstract and Bio
In this talk I will give an overview of my recent work at the intersection of humanitarian immigration and operations, with a particular focus on the dedicated docket program. The dedicated docket was introduced by the Biden Administration to reform the asylum system. It creates a separate queue for court proceedings in which judges are supposed to issue decisions for each case within a target time frame. Its goals are to improve speed, accuracy, and fairness. Though it meets its first goal, legal advocacy groups report that this comes at the expense of the last. Referring to it as a "denial of justice," they find that cases on the dedicated docket routinely fail to access legal representation and have a much lower asylum-grant rate. Against this backdrop, we study the operational implications of a dedicated-docket system. We develop a queueing model in which a policy-maker routes asylees to the regular or the dedicated docket and sets a delay target for the latter. Constrained by the target, the court allocates its capacity to minimize average delay, and immigration lawyers schedule their time between dockets to maximize the rate of successful asylum cases. Compared to a single docket, we show that the dedicated-docket system can Pareto-improve speed and accuracy. However, we also prove that the system satisfies two natural fairness rules only if it is dominated in speed and accuracy by the single docket. Our analysis can guide public discourse by informing both policymakers and legal advocacy groups of the necessary trade-offs involved in scheduling decisions in the asylum context. (Joint work with Wentao Weng.) A preprint of the paper is available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4785713.
Bio: Daniel Freund is an Assistant Professor of Operations Management at the MIT Sloan School of Management.
His research applies optimization, stochastic modeling, and revenue-management techniques to problems in transportation, online platforms, and humanitarian immigration, among others. His work has been recognized with the George B. Dantzig Dissertation Award (2018), the Daniel H. Wagner Prize (2018), and several INFORMS best-paper prizes. He frequently serves as an associate editor for *Operations Research* and *Transportation Science*, and on program committees for conferences such as EC, EAAMO, and TheWebConf.
Previous talks can be found here.
About The Seminar
Seminar Organizers: Itai Ashlagi, Mohak Goyal, Tristan Pollner.
Faculty Involved: Itai Ashlagi, Ashish Goel, Ramesh Johari, Amin Saberi, Aaron Sidford, Johan Ugander, Irene Lo, Ellen Vitercik.
Note for Speakers: The talk slot is 55 minutes including questions (as we often start a couple of minutes late). If you are giving a talk at RAIN, please plan for a 45-50 minute talk, since the audience usually asks a lot of questions. Also, the audience is fairly knowledgeable, so speakers should not feel obligated to provide basic game-theoretic, algorithmic, societal, industrial, probabilistic, or statistical background.
Website template from the Stanford MLSys Seminar Series.