
The RAIN seminar is held on Wednesdays from 12:00-1:00pm in Y2E2 101. And yes, lunch is provided!

RAIN schedule for Winter Quarter 2016-17

Date         Speaker                Topic
January 25   Devavrat Shah          Blind Regression, Recommendation System and Collaborative Filtering
February 8   Udi Weinsberg          Data-Science-Driven Products at Facebook
February 22  Anima Anandkumar       Large-scale Machine Learning: Theory and Practice
March 8      Éva Tardos             Learning and Efficiency of Outcomes in Games

RAIN schedule for Spring Quarter 2016-17

Date         Speaker                Topic
April 12     Alireza Tahbaz-Salehi  Supply Chain Disruptions: Evidence from the Great East Japan Earthquake
April 26     Aaron Roth             Quantifying Tradeoffs between Fairness and Accuracy in Online Learning
May 10       Mohsen Bayati          Exploiting the Natural Exploration in Online Decision-Making
May 24       Herve Moulin           Competitive Division of a Mixed Manna
May 31       John Dickerson         Using Optimization to Balance Fairness and Efficiency in Barter Markets

Google Calendar for RAIN


Previous year's talks

Archived talks can be accessed here.

Talk Abstracts

Blind Regression, Recommendation System and Collaborative Filtering
Devavrat Shah, MIT

We discuss the framework of Blind Regression (also known as the Latent Variable Model), motivated by the problem of matrix completion for recommendation systems: given a collection of users and movies, the goal is to predict the unknown rating of a user for a movie from the known observations, i.e., to complete the partially observed matrix. We posit that each user and each movie is associated with a latent feature, and that the rating of a user for a movie is a noisy version of a latent function applied to the associated features. Completing the matrix therefore boils down to predicting the latent function's value for user-movie pairs whose ratings are unknown, just as in the classical regression setting. However, unlike in regression, the features are not observed here -- hence "Blind" Regression. Such a model arises as a canonical characterization via the multi-dimensional exchangeability property of Aldous and Hoover (early 1980s).

In this talk, drawing inspiration from the classical Taylor expansion for differentiable functions, we propose a prediction algorithm that is consistent for all Lipschitz continuous functions. We provide a finite-sample analysis suggesting that the algorithm produces accurate predictions even when only a vanishing fraction of the matrix is observed. We discuss the relationship with spectral algorithms for matrix completion and with collaborative filtering.

The talk is based on joint works with Christina Lee, Yihua Li and Dogyoon Song (MIT).
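To make the nearest-neighbor flavor of this setup concrete, here is a heavily simplified, hypothetical user-user collaborative filtering sketch in the spirit of the estimator described above; the similarity measure (variance of rating differences on commonly rated movies), the threshold, and the toy data are illustrative choices, not the authors' exact algorithm or guarantees.

```python
import numpy as np

def predict_rating(R, u, i, max_dist=0.5):
    """Predict R[u, i] from a partially observed matrix R (np.nan = missing).

    Simplified user-user nearest-neighbor estimate: a neighbor v is used if
    the empirical variance of (R[u, :] - R[v, :]) over commonly rated movies
    is small; v's rating of movie i is then transferred, shifted by the mean
    difference between u and v. A sketch of the general idea only.
    """
    preds, weights = [], []
    for v in range(R.shape[0]):
        if v == u or np.isnan(R[v, i]):
            continue
        common = ~np.isnan(R[u]) & ~np.isnan(R[v])
        if common.sum() < 2:
            continue
        diff = R[u, common] - R[v, common]
        if np.var(diff) <= max_dist:               # "close" users only
            preds.append(R[v, i] + diff.mean())    # transfer rating, shift by mean gap
            weights.append(1.0)
    return np.average(preds, weights=weights) if preds else np.nan

# Toy example: 4 users x 4 movies, predict user 0's rating of movie 3.
R = np.array([[5.0, 4.0, 1.0, np.nan],
              [5.0, 4.0, 1.0, 2.0],
              [1.0, 2.0, 5.0, 4.0],
              [4.5, 4.0, 1.5, 2.5]])
print(predict_rating(R, u=0, i=3))   # about 2.25, averaged over the two similar users
```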

Bio: Devavrat Shah is a Professor in the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. His current research interests are at the interface of statistical inference and social data processing. His work has been recognized through prize paper awards in machine learning, operations research and computer science, as well as career prizes including the 2010 Erlang Prize from the INFORMS Applied Probability Society and the 2008 ACM SIGMETRICS Rising Star Award. He is a distinguished young alumnus of his alma mater, IIT Bombay.




Data-Science-Driven Products at Facebook
Udi Weinsberg, Facebook

This talk will give an overview of a range of data-driven products that the Core Data Science group helped build, mostly by applying machine learning and statistical methods to large-scale data. We'll talk about analysis of cascades and product adoption, identifying trends in real time, fighting scams, understanding the true meanings behind emoji, and figuring out how people laugh online.

Bio: Udi Weinsberg leads the Algorithms group in Core Data Science at Facebook. The group helps product teams across Facebook tackle difficult product problems and deliver new features by leveraging vast amounts of data together with a range of machine learning techniques. Before Facebook, Udi was a senior researcher at Technicolor, working on privacy in machine learning using cryptographic methods.




Large-scale Machine Learning: Theory and Practice
Anima Anandkumar, Amazon and Caltech

Large-scale machine learning requires blending computational thinking with statistical frameworks. Designing fast, efficient and distributed learning algorithms with statistical guarantees is an outstanding grand challenge. I will present perspectives from both theory and practice. I will demonstrate how spectral optimization can reach the globally optimal solution for many learning problems despite their non-convexity, including unsupervised learning of latent variable models, training neural networks, and reinforcement learning of partially observable Markov decision processes. In practice, tensor methods yield enormous gains in both running time and learning accuracy over traditional methods such as variational inference. I will then talk about recent advances in large-scale deep learning methods. Our lab at AWS is actively innovating on the MXNet package, a developer-friendly open-source deep learning framework designed for both efficiency and flexibility and built on a distributed parameter-server architecture. I will demonstrate how to use preconfigured Deep Learning AMIs and CloudFormation Templates on AWS to speed up deep learning research and development. I will conclude with outstanding challenges in bridging the gaps between theory and practice and in designing and analyzing large-scale learning algorithms.
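As a concrete illustration of the spectral/tensor idea mentioned above, the sketch below runs the standard tensor power iteration on a small orthogonally decomposable third-order tensor. The tensor, its rank, and all parameters are illustrative; real applications of these methods require whitening, deflation, and random restarts that are omitted here.

```python
import numpy as np

def tensor_power_iteration(T, n_iter=100, seed=0):
    """Recover one component of a symmetric, orthogonally decomposable
    3rd-order tensor T = sum_k lambda_k * a_k (x) a_k (x) a_k via the tensor
    power map v <- T(I, v, v) / ||T(I, v, v)||. Bare-bones sketch only:
    no whitening, deflation, or restarts.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = np.einsum('ijk,j,k->i', T, v, v)     # T(I, v, v)
        v = w / np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)   # estimated eigenvalue
    return lam, v

# Toy example: build a rank-2 orthogonal tensor and recover a component.
a1, a2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
T = 3.0 * np.einsum('i,j,k->ijk', a1, a1, a1) + 1.0 * np.einsum('i,j,k->ijk', a2, a2, a2)
print(tensor_power_iteration(T))   # recovers (3.0, a1) for this seed; other seeds may find a2
```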

Bio: Anima Anandkumar is currently a principal scientist at Amazon Web Services. She will be joining the Caltech CMS department in summer 2017 as a Bren endowed chair. Her research interests are in the areas of large-scale machine learning, non-convex optimization and high-dimensional statistics. In particular, she has been spearheading the development and analysis of tensor algorithms. She is the recipient of several awards such as the Alfred P. Sloan Fellowship, Microsoft Faculty Fellowship, Google research award, ARO and AFOSR Young Investigator Awards, NSF CAREER Award, Best Thesis Award from the ACM SIGMETRICS society, IBM Fran Allen PhD fellowship, and several best paper awards. She has been featured in a number of forums such as the Quora ML session, Huffington Post, Forbes, O'Reilly media, and so on. She received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She was a postdoctoral researcher at MIT from 2009 to 2010, an assistant professor at U.C. Irvine between 2010 and 2016, and a visiting researcher at Microsoft Research New England in 2012 and 2014.




Learning and Efficiency of Outcomes in Games
Eva Tardos, Department of Computer Science, Cornell University

Selfish behavior can often lead to suboptimal outcomes for all participants, a phenomenon illustrated by many classical examples in game theory. Over the last decade we have developed a good understanding of how to quantify the impact of strategic user behavior on overall performance in many games (including traffic routing as well as online auctions). In this talk we will focus on games where players use a form of learning that helps them adapt to the environment, and consider two closely related questions: which broad classes of learning behaviors guarantee that game outcomes converge to the quality guaranteed by the price of anarchy, and how fast is this convergence? Asking these questions more broadly: what kind of learning guarantees high social welfare in games when the game or the population of players is dynamically changing?
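For readers unfamiliar with the setup, the toy sketch below runs a standard no-regret learning rule (multiplicative weights / Hedge) in a two-link congestion game; the game, the cost functions, and all parameters are illustrative and are not the specific dynamics or results of the talk.

```python
import numpy as np

def hedge_routing(n_players=30, n_rounds=2000, eta=0.2, seed=0):
    """Toy no-regret dynamics in a congestion game: each of n_players repeatedly
    picks one of two parallel links. Link 0 costs x (its fraction of traffic);
    link 1 costs 1/2 + x/2. Every player runs multiplicative weights (Hedge) on
    the two choices using the realized link costs. Illustration only.
    """
    rng = np.random.default_rng(seed)
    weights = np.ones((n_players, 2))
    frac0 = []
    for _ in range(n_rounds):
        probs = weights / weights.sum(axis=1, keepdims=True)
        choices = np.array([rng.choice(2, p=p) for p in probs])
        x = np.bincount(choices, minlength=2) / n_players   # traffic split
        link_costs = np.array([x[0], 0.5 + 0.5 * x[1]])     # per-link costs this round
        weights *= np.exp(-eta * link_costs)                 # Hedge update
        frac0.append(x[0])
    return np.mean(frac0[-200:])

# At equilibrium the two links have equal cost: x = 1/2 + (1 - x)/2, i.e. x = 2/3,
# so the time-averaged play should put roughly 2/3 of the traffic on link 0.
print(hedge_routing())   # roughly 0.67
```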



Bio: Eva Tardos is the Jacob Gould Schurman Professor of Computer Science at Cornell University, and was chair of the Computer Science department from 2006 to 2010. She received her BA and PhD from Eötvös University in Budapest and joined the faculty at Cornell in 1989. Tardos's research interests are algorithms and algorithmic game theory. She is best known for her work on network-flow algorithms, approximation algorithms, and quantifying the efficiency of selfish routing. She has been elected to the National Academy of Engineering, the National Academy of Sciences, and the American Academy of Arts and Sciences, and is an external member of the Hungarian Academy of Sciences. She is the recipient of a number of fellowships and awards including the Packard Fellowship, the Gödel Prize, the Dantzig Prize, the Fulkerson Prize, and the IEEE Technical Achievement Award. She is Editor-in-Chief of the Journal of the ACM, has previously edited several other journals including the SIAM Journal on Computing and Combinatorica, has served on the program committees of many conferences, and was program committee chair for SODA'96, FOCS'05, and EC'13.




Supply Chain Disruptions: Evidence from the Great East Japan Earthquake
Alireza Tahbaz-Salehi, Columbia

Exploiting the exogenous and regional nature of the Great East Japan Earthquake of 2011, this paper provides a systematic quantification of the role of input-output linkages as a mechanism for the propagation and amplification of shocks. We document that the disruption caused by the earthquake and its aftermath propagated upstream and downstream along supply chains, affecting the direct and indirect suppliers and customers of disaster-stricken firms. We then use our empirical findings to obtain an estimate of the overall macroeconomic impact of the shock by taking these propagation effects into account. We find that the propagation of the shock over input-output linkages can account for a 1.2 percentage point decline in Japan's gross output in the year following the earthquake. We interpret these findings in the context of a general equilibrium model that explicitly takes firm-to-firm linkages into account.
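As a stylized, textbook-level illustration of how a shock can propagate over input-output linkages (not the paper's general-equilibrium model or its estimates), the snippet below computes the Leontief inverse for a hypothetical three-sector economy and shows how a demand shock to one sector changes gross output in all sectors.

```python
import numpy as np

# Hypothetical input-output matrix: A[i, j] = amount of sector i's output
# used to produce one unit of sector j's output.
A = np.array([[0.0, 0.3, 0.1],
              [0.2, 0.0, 0.2],
              [0.1, 0.1, 0.0]])

# Gross output x needed to meet final demand d solves x = A x + d, so
# x = (I - A)^(-1) d; the Leontief inverse captures direct plus indirect linkages.
L = np.linalg.inv(np.eye(3) - A)

d = np.array([1.0, 1.0, 1.0])            # baseline final demand
d_shock = np.array([1.0, 0.8, 1.0])      # 20% demand shock to sector 1
print((L @ d_shock - L @ d) / (L @ d))   # proportional output change, all sectors
```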




Quantifying Tradeoffs between Fairness and Accuracy in Online Learning
Aaron Roth, Department of Computer Science, University of Pennsylvania

In this talk, I will discuss our recent efforts to formalize a particular notion of "fairness" in online decision-making problems, and to study its cost in terms of the achievable learning rate. Our focus for most of the talk will be the "contextual bandit" problem, which models the following scenario. Every day, applicants from different populations submit loan applications to a lender, who must select a subset of them to receive loans. Each population has a (potentially different) mapping from applications to creditworthiness that is initially unknown to the lender. The fairness constraint we impose roughly translates to: "Less creditworthy individuals should never be preferentially favored over more creditworthy individuals, regardless of group membership." Despite the fact that this constraint seems consistent with the bank's profit motive, we show that imposing a fairness constraint provably slows down learning --- sometimes only mildly, but sometimes substantially, depending on the structure of the problem. Time permitting, we will mention recent extensions to the reinforcement learning setting, in which the actions of the learner can affect its environment, and to economic settings in which the learner must be incentivized by a principal to act fairly.

Joint work with Matthew Joseph, Michael Kearns, Jamie Morgenstern, and Seth Neel.
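As a rough illustration of the kind of fairness constraint described above, the sketch below implements an interval-chaining style selection rule: a candidate can only lose out to another if the data can already distinguish their estimated qualities. The chaining rule and the example intervals are illustrative; the confidence bounds themselves (and the resulting learning-rate analysis) are the substance of the work and are omitted here.

```python
import numpy as np

def fair_select(lower, upper, rng):
    """Given lower/upper confidence bounds on each candidate's expected quality,
    find every candidate whose interval is connected (by overlaps) to the interval
    with the highest upper bound, and pick uniformly among them. Under such a rule,
    no candidate is favored over another unless the data already separates them.
    Simplified sketch; the confidence bounds (e.g. from regression) are assumed given.
    """
    lower, upper = np.asarray(lower), np.asarray(upper)
    order = np.argsort(-upper)              # candidates by upper bound, descending
    chained = [order[0]]
    threshold = lower[order[0]]             # smallest lower bound in the chain so far
    for j in order[1:]:
        if upper[j] >= threshold:           # interval overlaps the chained set
            chained.append(j)
            threshold = min(threshold, lower[j])
        else:
            break                           # everything further down is separated
    return rng.choice(chained)

rng = np.random.default_rng(0)
# Intervals [0.2, 0.55], [0.5, 0.9], [0.6, 0.95] overlap into one chain,
# so each of the three candidates is selected with probability 1/3.
print(fair_select(lower=[0.2, 0.5, 0.6], upper=[0.55, 0.9, 0.95], rng=rng))
```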

Bio: Aaron Roth is the Class of 1940 Bicentennial Term Associate Professor of Computer and Information Sciences at the University of Pennsylvania, affiliated with the Warren Center for Network and Data Science, and co-director of the Networked and Social Systems Engineering (NETS) program. Previously, he received his PhD from Carnegie Mellon University and spent a year as a postdoctoral researcher at Microsoft Research New England. He is the recipient of a Presidential Early Career Award for Scientists and Engineers (PECASE) awarded by President Obama in 2016, an Alfred P. Sloan Research Fellowship, an NSF CAREER Award, and a Yahoo! ACE Award. His research focuses on the algorithmic foundations of data privacy, algorithmic fairness, game theory and mechanism design, learning theory, and the intersections of these topics. Together with Cynthia Dwork, he is the author of the book "The Algorithmic Foundations of Differential Privacy."




Exploiting the Natural Exploration in Online Decision-Making
Mohsen Bayati, Graduate School of Business, Stanford University

The growing availability of data has enabled practitioners to tailor decisions at the individual level. This involves learning a model of decision outcomes conditional on individual-specific covariates (contexts). Recently, contextual bandits have been introduced as a framework for studying these online, sequential decision-making problems. This literature predominantly focuses on algorithms that balance an exploration-exploitation tradeoff, since greedy policies that exploit current estimates without any exploration may be sub-optimal in general. However, exploration-free greedy policies are desirable in many practical settings where experimentation may be prohibitively costly or unethical (e.g., clinical trials).

In this talk we show that, for a general class of context distributions, the greedy policy benefits from the natural exploration provided by the varying contexts and becomes asymptotically optimal under some assumptions on the problem parameters. Motivated by these results, we introduce Greedy-First, a new algorithm that uses only the observed contexts and rewards to determine whether to follow the greedy policy or to explore. We prove that this algorithm is asymptotically optimal without any additional assumptions. Through simulations we demonstrate that Greedy-First successfully reduces experimentation and outperforms existing (exploration-based) algorithms.

Joint work with Hamsa Bastani and Khashayar Khosravi
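A stylized sketch of the idea behind a Greedy-First-type loop is given below: act greedily on per-arm regression estimates and monitor whether every arm keeps accumulating information from the naturally varying contexts, switching to an exploration-based policy only if that natural exploration stalls. The test statistic, threshold, fallback, and simulated data here are illustrative placeholders, not the paper's exact algorithm.

```python
import numpy as np

def greedy_first(contexts, reward_fn, n_arms=2, lam=1.0, c0=0.02):
    """Stylized Greedy-First-type loop for a linear contextual bandit: act
    greedily on per-arm ridge estimates, and monitor whether each arm's design
    matrix keeps gaining information (minimum eigenvalue growing roughly
    linearly in t). If it stalls, hand off to an exploration-based policy
    (here just a flag). Thresholds and the switch test are illustrative.
    """
    d = contexts.shape[1]
    A = [lam * np.eye(d) for _ in range(n_arms)]         # per-arm design matrices
    b = [np.zeros(d) for _ in range(n_arms)]
    use_greedy = True
    for t, x in enumerate(contexts, start=1):
        theta = [np.linalg.solve(A[k], b[k]) for k in range(n_arms)]
        arm = int(np.argmax([x @ th for th in theta]))    # greedy choice
        r = reward_fn(arm, x)
        A[arm] += np.outer(x, x)
        b[arm] += r * x
        # Natural-exploration check: every arm should keep accumulating data.
        if use_greedy and min(np.linalg.eigvalsh(Ak)[0] for Ak in A) < c0 * t:
            use_greedy = False                            # would switch to an explorer here
    return use_greedy

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 3))                        # diverse contexts
true_theta = [np.array([1.0, 0.0, 0.5]), np.array([0.0, 1.0, -0.5])]
reward = lambda k, x: x @ true_theta[k] + 0.1 * rng.standard_normal()
print(greedy_first(X, reward))   # True: diverse contexts supply enough free exploration
```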




Competitive Division of a Mixed Manna
Herve Moulin, University of Glasgow

A mixed manna contains goods (that everyone likes), bads (that everyone dislikes), as well as items that are goods to some agents, but bads or satiated to others. If all items are goods and utility functions are homogeneous of degree one, concave (and monotone), the competitive division maximizes the Nash product of utilities (Gale-Eisenberg): hence it is welfarist (determined by the set of feasible utility profiles), unique, continuous and easy to compute. We show that the competitive division of a mixed manna is still welfarist. If the zero utility profile is Pareto dominated, the competitive profile is strictly positive and still uniquely maximizes the product of utilities. If the zero profile is infeasible (for instance if all items are bads), the competitive profiles are strictly negative, and are the critical points of the product of disutilities on the efficiency frontier. The latter allows for multiple competitive utility profiles, from which no single-valued selection can be continuous or Resource Monotonic. Thus the implementation of competitive fairness under linear preferences in interactive platforms like SPLIDDIT will be more difficult when the manna contains bads that overwhelm the goods.
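For the goods-only case mentioned above, the competitive division can be computed by maximizing the Nash product (equivalently, the sum of log utilities) over feasible allocations. The snippet below solves a tiny two-agent, two-good instance with additive utilities; the instance and the solver choice are illustrative, and the mixed-manna case requires the more delicate analysis described in the abstract.

```python
import numpy as np
from scipy.optimize import minimize

# Competitive (Gale-Eisenberg) division of divisible goods with additive
# utilities: maximize the product of utilities, i.e. the sum of logs,
# over allocations in which each good is fully divided. Goods-only sketch.
V = np.array([[3.0, 1.0],      # agent 0's values for goods 0, 1
              [1.0, 3.0]])     # agent 1's values
n, m = V.shape

def neg_log_nash(x_flat):
    X = x_flat.reshape(n, m)                     # X[i, j] = share of good j given to agent i
    u = (V * X).sum(axis=1)
    return -np.sum(np.log(u + 1e-12))

cons = [{'type': 'eq', 'fun': lambda x, j=j: x.reshape(n, m)[:, j].sum() - 1.0}
        for j in range(m)]                       # each good fully allocated
res = minimize(neg_log_nash, x0=np.full(n * m, 1.0 / n),
               bounds=[(0, 1)] * (n * m), constraints=cons, method='SLSQP')
print(res.x.reshape(n, m))   # each agent gets essentially all of the good they value more
```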

Bio: Herve Moulin graduated in 1971 from the Ecole Normale Superieure in Paris, and received his Ph.D. in mathematics from the Universite de Paris in 1975. He has taught at the Ecole Nationale de la Statistique et Administration Economique, the University of Paris at Dauphine, Virginia Polytechnic Institute and State University, Duke University and Rice University, and is currently at the University of Glasgow. He has been a Fellow of the Econometric Society since 1983. His work has contributed to redefining the field of 'normative economics', which aims to invent new resource allocation mechanisms - or refine existing ones - in a variety of contexts: voting rules; the fair division of assets (as in a divorce or inheritance); rationing of over-demanded commodities (such as organs for transplant or seats for a popular event); the exploitation of a "commons"; the assignment of tasks between workers; the scheduling of jobs in a queue; sharing the cost and pricing the traffic of a communication network, etc.




Using Optimization to Balance Fairness and Efficiency in Barter Markets
John Dickerson, University of Maryland

The exchange of indivisible goods without money addresses a variety of constrained economic settings where a medium of exchange - such as money - is considered inappropriate. Participants are either matched directly with another participant or, in more complex domains, in barter cycles and chains with other participants, before exchanging their endowed goods. We show that techniques from computer science and operations research, combined with the recent availability of massive data and inexpensive computing, can guide the design of such matching markets and enable them to run in the real world. A key application domain for our work is kidney exchange, an organized market where patients with end-stage renal failure swap their willing but incompatible donors. We present new models that address three fundamental dimensions of kidney exchange: (i) uncertainty over the existence of possible trades, (ii) the balance between efficiency and fairness, and (iii) inherent dynamism. Next, we combine these dimensions, along with high-level human-provided guidance, into a unified framework for learning to match in a general dynamic setting. This framework, which we call FutureMatch, takes as input a high-level objective (e.g., "maximize graft survival of transplants over time") decided on by experts, then automatically learns from data how to make this objective concrete and learns the "means" to accomplish it - a task that, in our experience, humans handle poorly. Throughout, we draw on insights from our work with the United Network for Organ Sharing (UNOS) US-wide exchange and on experiments with data from the National Health Service UK-wide exchange.
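As a toy illustration of clearing a barter market, the snippet below restricts attention to two-way swaps: incompatible patient-donor pairs are nodes, an edge means the two pairs could exchange donors, and edge weights encode hypothetical priorities; a maximum-weight matching then selects the swaps. Real kidney exchanges also use longer cycles and chains and are cleared with integer programming, and nothing here reflects the FutureMatch framework itself.

```python
import networkx as nx

# Nodes are incompatible patient-donor pairs; an edge means the two pairs can
# exchange donors; weights encode (hypothetical) priority, e.g. a boost for a
# hard-to-match patient. Two-way swaps only.
G = nx.Graph()
G.add_weighted_edges_from([
    ('pair_A', 'pair_B', 1.0),
    ('pair_B', 'pair_C', 1.0),
    ('pair_C', 'pair_D', 2.0),   # pair_D is highly sensitized: weight its swap up
    ('pair_A', 'pair_D', 1.0),
])

# Maximum-weight matching chooses which swaps to perform.
matching = nx.max_weight_matching(G, maxcardinality=True)
print(matching)   # e.g. {('pair_A', 'pair_B'), ('pair_C', 'pair_D')}
```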

Bio: John P. Dickerson is an Assistant Professor of Computer Science at the University of Maryland, and a recent CS PhD graduate of Carnegie Mellon University. His research centers on solving practical economic problems using techniques from computer science, stochastic optimization, and machine learning. He has worked extensively on theoretical and empirical approaches to kidney exchange, where his work has set policy at the UNOS nationwide exchange; game-theoretic approaches to counter-terrorism and negotiation, where his models have been deployed; and computational advertising through Optimized Markets, a CMU spin-off company. He created FutureMatch, a general framework for learning to match subject to human value judgments; that framework won a 2014 HPCWire Supercomputing Award. Prior to his Ph.D., he worked at IBM and in R&D at a defense agency. He is an NDSEG Fellow, Facebook Fellow, and Siebel Scholar.