Stanford RAIN (Research on Algorithms and Incentives in Networks) Seminar

The Internet is a complex network made of both machines and people, and problems in this domain therefore often require techniques from both algorithms and the social sciences. The RAIN seminar, supported by SOAL, provides a gathering place for talks and social discussion in this area.

Upcoming Talks

Dec. 2, 2024
Azarakhsh Malekian
TBD
Abstract: TBD

Bio: TBD
Dec. 9, 2024
Daniel Freund
TBD
Abstract: TBD

Bio: TBD

Previous Talks This Year

Sept. 27, 2024
Jon Kleinberg
Language Generation in the Limit
Abstract: Although current large language models are complex, the most basic specifications of the underlying language generation problem itself are simple to state: given a finite set of training samples from an unknown language, produce valid new strings from the language that don't already appear in the training data. Here we ask what we can conclude about language generation using only this specification, without further properties or distributional assumptions. In particular, we consider models in which an adversary enumerates the strings of an unknown target language that is known only to come from one of a possibly infinite list of candidates, and the goal is to generate new strings from this target language; we show that it is possible to give certain non-trivial guarantees for language generation in this setting. The resulting guarantees contrast dramatically with negative results due to Gold and Angluin in a well-studied model of language learning where the goal is to identify an unknown language from samples; the difference between these results suggests that identifying a language is a fundamentally different problem than generating from it. (This is joint work with Sendhil Mullainathan.)

Bio: Jon Kleinberg is the Tisch University Professor in the Departments of Computer Science and Information Science at Cornell University. His research focuses on the interaction of algorithms and networks, the roles they play in large-scale social and information systems, and their broader societal implications. He is a member of the National Academy of Sciences, the National Academy of Engineering, the American Academy of Arts and Sciences, and the American Philosophical Society, and he serves on the US National AI Advisory Committee. He has received MacArthur, Packard, Simons, Sloan, and Vannevar Bush research fellowships, as well as awards including the Harvey Prize, the Lanchester Prize, the Nevanlinna Prize, the World Laureates Association Prize, and the ACM Prize in Computing.
Oct. 7, 2024
Vinay Rao
Deployment Safeguards for AI Models

Bio: Vinay Rao leads Anthropic's Trust and Safety team, which develops and enforces policies governing Claude's real-world applications. His team focuses on mitigating catastrophic risks as well as critical risks such as election interference, child safety, and misinformation. They also play a key role in securing Anthropic's models to comply with the company's Responsible Scaling Policy for safe and ethical AI development and deployment. With nearly two decades of experience in trust and safety, Vinay has built and managed leading Trust and Safety functions at major tech companies, including YouTube, Google, Airbnb, and Stripe.
Oct. 28, 2024
Amy Greenwald
Solving Games Forwards and Backwards
Abstract: Computing equilibria in games is a problem of great interest in both economics and computer science. We present a min-max formulation of this problem, in which the minimizer seeks an equilibrium solution for the game, while the maximizer seeks to find fault with the proposed solutions. We call this problem the 'forward' problem. In the 'backwards' problem, we are instead given an equilibrium and a parameterized game, and we are interested in inferring a game from that equilibrium: i.e., identifying the game parameters that induce the observed equilibrium. We consider the backwards problem under weaker and weaker assumptions, ranging from inverse multiagent planning to inverse multiagent learning and beyond, for each of which we present min-max formulations. While it can be difficult to ensure that the min-max formulation of a forward problem is convex-concave, and thus amenable to standard solutions (e.g., gradient descent ascent), we find it easier to ensure that the min-max formulation of a backwards problem is convex-concave. We apply our backwards method to Spanish electricity market time-series data, and then push the inferred game forward to predict future market prices. Joint work with Denizalp Goktas and Sadie Zhao.

Bio: Amy Greenwald is a Professor of Computer Science at Brown University in Providence, Rhode Island. Her research focuses on game-theoretic and economic interactions among computational agents with applications to automated bidding and negotiation in domains ranging from advertising auctions to supply chains. She is also active in promoting diversity in Computer Science, leading multiple K-12 initiatives in the Providence public schools.
Nov. 4, 2024
Weijie Zhong
Rank-Guaranteed Auctions
Abstract: We propose a combinatorial ascending auction that is 'approximately' optimal, requiring minimal rationality to achieve this level of optimality, and is robust to strategic and distributional uncertainties. Specifically, the auction is rank-guaranteed, meaning that for any menu M and any valuation profile, the ex-post revenue is guaranteed to be at least as high as the highest revenue achievable from feasible allocations, taking the (|M| + 1)th-highest valuation for each bundle as the price. Our analysis highlights a crucial aspect of combinatorial auction design, namely, the design of menus. We provide simple and approximately optimal menus in various settings.

Bio: Weijie Zhong is an Assistant Professor of Economics at the Stanford Graduate School of Business. He specializes in microeconomic theory, focusing on how information influences individual behavior and market outcomes. His recent research delves into the optimal ways to dynamically acquire information to aid decision-making.
Nov. 11, 2024
Pengyu Qian
Incentivizing Resource Pooling
Abstract: Resource pooling improves system efficiency drastically in large stochastic systems, but its effective implementation in decentralized systems remains relatively underexplored. This paper studies how to incentivize resource pooling when agents are self-interested and their states are private information. Our primary motivation is applications in the design of decentralized computing markets, among others. We study a standard multi-server queueing model in which each server is associated with an M/M/1 queue and aims to minimize its time-average job holding and processing costs. We design a simple token-based mechanism where servers can earn tokens by offering help and spend tokens to request help from other servers, all in their self-interest. The mechanism induces a complex game among servers. We employ the fluid mean-field equilibrium (FMFE) concept to analyze the system, combining mean-field approximation with fluid relaxation. This framework enables us to derive a closed-form characterization of servers' FMFE strategies. We show that these FMFE strategies approximate well the servers' rational behavior. We leverage this framework to optimize the design of the mechanism and present our main results: as the number of servers increases, the proposed mechanism incentivizes complete resource pooling; that is, the system dynamics and performance under our mechanism match those under centralized control.

Bio: Pengyu Qian is an Assistant Professor in the Operations & Technology Management department at the Questrom School of Business, Boston University. His research studies the design and analysis of marketplaces in dynamic settings, using tools from probability, optimization, and game theory. He is interested in foundational models driven by challenges in the sharing economy and the allocation of public resources. His research emphasizes algorithms and mechanisms that not only have good theoretical guarantees but are also simple, robust, and hence practical for real-world systems. Pengyu is a recipient of the INFORMS JFIG Best Paper Prize. He earned his Ph.D. from Columbia Business School and his B.S. from Peking University.
Nov. 18, 2024
Noam Brown
Learning to Reason with LLMs
Abstract: Large language models (LLMs) have demonstrated remarkable capabilities in generating coherent text and completing various natural language tasks. Nevertheless, their ability to perform complex, general reasoning has remained limited. In this talk, I will describe OpenAI's new o1 model, an LLM trained via reinforcement learning to generate a hidden chain of thought before its response. We have found that the performance of o1 consistently improves with more reinforcement learning compute and with more inference compute. o1 surpasses previous state-of-the-art models in a variety of benchmarks that require reasoning, including mathematics competitions, programming contests, and advanced science question sets. I will discuss the implications of scaling this paradigm even further.

Bio: Noam Brown is a research scientist at OpenAI investigating reasoning and multi-agent AI. He co-created Libratus and Pluribus, the first AIs to defeat top humans in two-player no-limit poker and multiplayer no-limit poker, respectively, and Cicero, the first AI to achieve human-level performance in the natural language strategy game Diplomacy. He has received the Marvin Minsky Medal for Outstanding Achievements in AI, was named one of MIT Tech Review's 35 Innovators Under 35, and his work on Pluribus was named by Science as one of the top 10 scientific breakthroughs of 2019. Noam received his PhD from Carnegie Mellon University.

Previous talks can be found here.

About The Seminar

Seminar Organizers: Itai Ashlagi, Mohak Goyal, Tristan Pollner.

Faculty Involved: Itai Ashlagi, Ashish Goel, Ramesh Johari, Amin Saberi, Aaron Sidford, Johan Ugander, Irene Lo, Ellen Vitercik.

Note for Speakers: The talk slot is 55 minutes including questions (we often start a couple of minutes late). If you are giving a talk at RAIN, please plan a 45-50 minute talk, since the audience usually asks a lot of questions. Also, the audience is fairly knowledgeable, so speakers should not feel obligated to provide basic game-theoretic, algorithmic, societal, industrial, probabilistic, or statistical background.

Website template from the Stanford MLSys Seminar Series.