
The RAIN seminar is held on Wednesdays from 12:00-1:00pm in Y2E2 101. And yes, lunch is provided!

RAIN schedule for Spring Quarter 2017-18

Date | Speaker | Topic
April 18 | Sid Banerjee | Allocating Resources, in the Future
April 25 | Avi Feller | The role of the propensity score in the synthetic control method
May 9 | C. Seshadhri | Algorithmically exploiting structure or: how I learned to stop worrying and enjoy real-world graphs
May 23 | Miklos Racz | High-dimensional random geometric graphs
June 6 | Richard Cole | Dynamics of Distributed Updating in Fisher Markets

Google Calendar for RAIN


Previous year's talks

Archived talks can be accessed here.

Talk Abstracts

Allocating Resources, in the Future
Sid Banerjee, Cornell

Abstract: The online allocation of scarce resources is a canonical paradigm of decision-making under uncertainty, and is widely studied in many fields of engineering under a variety of titles (dynamic allocation, revenue management, prophet inequalities, etc.). In this talk, I will re-examine the basic online allocation problem, with the aim of building bridges to the ever-improving predictive capabilities enabled by modern machine-learning methods. To this end, I will present a new Bayesian-learning inspired algorithm for online allocation problems, and show that it achieves the first horizon-independent and budget-independent regret bounds for online packing with stochastic inputs. Surprisingly, the result stems from elementary tools: LP sensitivity and basic concentration of measure.
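The abstract does not spell out the algorithm, so the sketch below only illustrates the general flavor of certainty-equivalent re-solving for online packing: at each arrival, solve a fluid LP on the expected remaining demand and accept the current request according to the LP's acceptance rates. The rewards, sizes, and arrival probabilities are invented for the example; this is not the speaker's method.

```python
# Illustrative sketch (not the talk's algorithm): a certainty-equivalent
# "re-solve" policy for online packing with a single resource.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
rewards = np.array([1.0, 2.0, 4.0])   # hypothetical per-type rewards
sizes   = np.array([1.0, 1.0, 2.0])   # hypothetical resource consumption
probs   = np.array([0.5, 0.3, 0.2])   # known arrival distribution
T, budget = 200, 120.0

total_reward = 0.0
for t in range(T):
    k = rng.choice(len(probs), p=probs)          # observe arriving type
    remaining = T - t
    # Fluid LP over expected remaining arrivals:
    #   max  sum_k probs[k]*remaining*rewards[k]*x[k]
    #   s.t. sum_k probs[k]*remaining*sizes[k]*x[k] <= budget, 0 <= x <= 1
    res = linprog(c=-(probs * remaining * rewards),
                  A_ub=[probs * remaining * sizes],
                  b_ub=[budget], bounds=[(0, 1)] * len(probs))
    x = res.x
    # Accept the request with probability given by its fluid acceptance rate.
    if sizes[k] <= budget and rng.random() < x[k]:
        total_reward += rewards[k]
        budget -= sizes[k]

print(f"collected reward: {total_reward:.1f}")
```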

Bio: Sid Banerjee is an assistant professor in the School of Operations Research and Information Engineering (ORIE) at Cornell. His research is on stochastic modeling, and the design of algorithms and incentives for large-scale systems. He got his PhD in ECE from UT Austin, following which he was a postdoctoral researcher in the Social Algorithms Lab at Stanford, as well as a technical consultant at Lyft.




The role of the propensity score in the synthetic control method
Avi Feller, UC Berkeley

Abstract: The synthetic control method is a popular approach for estimating the impact of a treatment on a (typically single) treated unit in settings with many control units and observed pre-treatment outcomes for all units. The key idea is to construct a weighted average of control units, known as a synthetic control (SC), that minimizes imbalance of pre-treatment outcomes between the treated unit and the synthetic control. Our main result is that synthetic control weights are numerically equivalent to inverse propensity score weights (IPW) with pre-treatment outcomes as covariates and heavy regularization of the propensity score model. Leveraging a primal-dual connection, we map features of the propensity score model to choices about the specification of the original SC optimization problem. In particular, the original SC method, which balances the L2 norm of pre-treatment outcomes, is identical to IPW with an L2 penalty on the propensity score model and with the least feasible regularization; other choices of balance criteria (e.g., the L-infinity norm) correspond to other forms of regularization (e.g., Lasso).

We then use this numerical equivalence to make several recommendations for practice. First, as in other settings with high-dimensional covariates, we argue that the estimate can be quite sensitive to the type and level of regularization. Where the researcher has knowledge of the data generating process, there are large returns to choosing an appropriate regularization that emphasizes balance on key covariates; in general, there is little reason to choose the default regularization implied by the original SC approach. We explore several alternative approaches for setting the regularization, including cross-validation. Second, we directly apply existing results from the literature on estimating causal effects with high-dimensional covariates. For instance, once we view SC as an IPW estimator, it is natural to consider an Augmented IPW estimator, combining the SC weights with an outcome model. Third, we conduct extensive simulation studies to assess the performance of these different estimators in practice. Finally, we use these ideas to assess the impact of the 2012 Kansas tax cuts on economic growth, finding persistent negative effects.
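As an illustration of the optimization problem the abstract starts from, here is a minimal sketch of the standard SC fit: choose simplex weights over control units that best match the treated unit's pre-treatment outcomes in L2 norm. The data are synthetic placeholders, and the SLSQP solver choice is ours, not from the talk.

```python
# Minimal sketch of the standard synthetic control fit: simplex weights
# over control units matching the treated unit's pre-period outcomes.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_controls, n_pre = 20, 10                       # placeholder dimensions
Y0 = rng.normal(size=(n_pre, n_controls))        # controls' pre-period outcomes
true_w = rng.dirichlet(np.ones(n_controls))
y1 = Y0 @ true_w + rng.normal(scale=0.01, size=n_pre)  # treated unit's outcomes

loss = lambda w: np.sum((y1 - Y0 @ w) ** 2)      # L2 pre-treatment imbalance
res = minimize(loss, x0=np.full(n_controls, 1 / n_controls),
               bounds=[(0, 1)] * n_controls,                      # w >= 0
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
w_sc = res.x
print("pre-period imbalance:", np.linalg.norm(y1 - Y0 @ w_sc))
```

Per the abstract, the weights this produces coincide numerically with a heavily L2-regularized IPW fit on the pre-treatment outcomes; the code above only shows the SC side of that equivalence.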

Bio: Avi Feller is an assistant professor at the Goldman School, where he works at the intersection of public policy, data science, and statistics. His methodological research centers on learning more from social policy evaluations, especially randomized experiments. His applied research focuses on working with governments on using data to design, implement, and evaluate policies. Prior to his doctoral studies, Feller served as Special Assistant to the Director at the White House Office of Management and Budget and worked at the Center on Budget and Policy Priorities. Feller received a Ph.D. in Statistics from Harvard University, an M.Sc. in Applied Statistics as a Rhodes Scholar at the University of Oxford, and a B.A. in Political Science and Applied Mathematics from Yale University.




Algorithmically exploiting structure or: how I learned to stop worrying and enjoy real-world graphs
C. Seshadhri, UC Santa Cruz

Abstract: As an algorithms researcher, the existence of heuristics for various computational graph problems (like finding cliques) on real-world graphs bothers me to no end. What is it about (say) social networks that makes classically hard problems easy? Or, how do algorithms circumvent various lower bounds when it comes to real-world graphs?
This question has no simple answer, and I will present a tale of two stories on this theme. In the first, we show how to practically and provably count cliques in social networks, by exploiting the local density of these graphs. A curious feature of the mathematics is the use of Turan's theorem from extremal combinatorics to prove the correctness of the algorithm. In the second, we show how to estimate the degree distribution of a graph by sampling a tiny portion of it, by exploiting the heavy-tailed structure of the degree distribution. This result uses some recent theoretical techniques in sublinear algorithms to simulate uniform random edge queries through uniform random vertices. In each of these results, there is an interesting interplay of combinatorics, randomized algorithms, and social network analysis.
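As a concrete instance of exploiting real-world graph structure, here is a minimal sketch of the classic degree-ordering trick that underlies many practical triangle and clique counters (it is not the talk's algorithm, and the toy edge list is made up): orienting each edge toward its higher-degree endpoint keeps out-degrees small in heavy-tailed graphs, so enumerating pairs of out-neighbors is cheap.

```python
# Triangle counting via degree ordering (illustrative sketch).
from collections import defaultdict
from itertools import combinations

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]  # toy graph

deg = defaultdict(int)
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

# Orient each edge from its lower-degree endpoint to its higher-degree
# endpoint (ties broken by vertex id); in heavy-tailed graphs this keeps
# every vertex's out-degree small.
out = defaultdict(set)
for u, v in edges:
    a, b = sorted((u, v), key=lambda x: (deg[x], x))
    out[a].add(b)

triangles = 0
for u in out:
    for v, w in combinations(out[u], 2):   # pairs of out-neighbors
        if w in out[v] or v in out[w]:     # closing edge present?
            triangles += 1

print("triangles:", triangles)             # the toy graph has 2 triangles
```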

Bio: C. Seshadhri is an assistant professor of Computer Science at the University of California, Santa Cruz. Prior to joining UCSC, he was a researcher at Sandia National Labs, Livermore, in the Information Security Sciences department from 2010 to 2014. His primary interest is in mathematical foundations of big data, especially modeling and algorithms. By and large, he works at the boundary of theoretical computer science and data mining. His work spans many areas: sublinear algorithms, graph algorithms, graph modeling, and scalable computation for large data sets.




High-dimensional random geometric graphs
Miklos Racz, Princeton

Abstract: I will talk about two natural random geometric graph models, where connections between vertices depend on distances between latent d-dimensional labels. We are particularly interested in the high-dimensional case when d is large. We study a basic hypothesis testing problem: can we distinguish a random geometric graph from an Erdos-Renyi random graph (which has no geometry)? We show that there exists a computationally efficient procedure which is almost optimal (in an information-theoretic sense). The proofs will highlight new graph statistics as well as connections to random matrices. This is based on joint work with Sebastien Bubeck, Jian Ding, Ronen Eldan, and Jacob Richey.
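A toy version of the testing problem is easy to simulate (this uses plain triangle counts, not the paper's statistics, and the parameters are ours): in low dimension, the triangle inequality on latent positions makes triangles far more likely than in an Erdos-Renyi graph of the same edge density.

```python
# Toy simulation: triangle counts separate a spherical random geometric
# graph from G(n, 1/2) when the latent dimension d is small.
import numpy as np

rng = np.random.default_rng(2)
n, d, p = 300, 3, 0.5

def triangle_count(A):
    return np.trace(A @ A @ A) / 6   # trace(A^3)/6 counts triangles

# Erdos-Renyi G(n, 1/2)
A = np.triu(rng.random((n, n)) < p, k=1).astype(float)
A = A + A.T

# Random geometric graph on the sphere: connect i ~ j iff <x_i, x_j> >= 0,
# which also yields edge density 1/2.
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
G = (X @ X.T >= 0).astype(float)
np.fill_diagonal(G, 0)

print("ER triangles :", triangle_count(A))
print("RGG triangles:", triangle_count(G))  # noticeably larger for small d
```

As d grows relative to n, this gap shrinks, which is what makes the high-dimensional regime delicate.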

Bio: Miklos is an Assistant Professor in the ORFE Department at Princeton University. His research focuses on probability, statistics, and their applications. Before Princeton, he spent two years as a postdoc in the Theory Group at Microsoft Research, Redmond. He received his PhD in Statistics from UC Berkeley in 2015, where he was advised by Elchanan Mossel. He also obtained an MS in Computer Science from Berkeley. Previously, he received an MS in Mathematics from the Budapest University of Technology and Economics, under the supervision of Marton Balazs and Balint Toth.




Dynamics of Distributed Updating in Fisher Markets
Richard Cole, NYU

Abstract: A major goal in Algorithmic Game Theory is to justify equilibrium concepts from an algorithmic and complexity perspective. One appealing approach is to identify natural distributed algorithms that converge quickly to an equilibrium. This work establishes new convergence results for two generalizations of proportional response in Fisher markets. These results stem from a correspondence with mirror descent algorithms for convex optimization. Several of our new results are a consequence of new notions of strong Bregman convexity and of strong Bregman convex-concave functions, and associated linear rates of convergence, which may be of independent interest. Among other results, we analyze a damped generalized proportional response and show a linear rate of convergence in a Fisher market with buyers whose utility functions cover the full spectrum of CES utilities aside from the extremes of linear and Leontief utilities; when these utilities are included, we obtain an empirical O(1/T) rate of convergence. Joint work with Yun Kuen Cheung and Yixin Tao.
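For context, below is a minimal sketch of classic, undamped proportional response in a linear Fisher market, the base dynamic that the talk's damped, CES-utility variants generalize (the abstract treats linear utilities as an extreme case, so this is background rather than the talk's setting). Utilities and budgets are arbitrary illustrative numbers.

```python
# Classic proportional response in a linear Fisher market (background sketch):
# each buyer re-splits its budget across goods in proportion to the utility
# each good contributed in the previous round.
import numpy as np

rng = np.random.default_rng(3)
n_buyers, n_goods = 4, 3
U = rng.uniform(0.1, 1.0, size=(n_buyers, n_goods))  # linear utilities u_ij
B = np.ones(n_buyers)                                 # unit budgets

bids = np.full((n_buyers, n_goods), 1.0 / n_goods)    # initial bids b_ij
for t in range(500):
    prices = bids.sum(axis=0)                         # p_j = sum_i b_ij
    alloc = bids / prices                             # x_ij = b_ij / p_j
    gains = U * alloc                                 # u_ij * x_ij
    bids = B[:, None] * gains / gains.sum(axis=1, keepdims=True)

print("equilibrium prices ~", bids.sum(axis=0))
```

Note that each bid row stays on the buyer's budget simplex throughout, which is the feasible region in the mirror-descent correspondence the abstract mentions.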

Bio: Richard Cole is a Silver Professor of Computer Science at NYU. His main interest is the design and analysis of algorithms. He currently concentrates on algorithmic economic market theory and game theory. He has previously worked on string and pattern matching, amortization, parallelism, network and routing problems. He has also been interested in the use of visualization for algorithm explanation and teaching. He served as department chair from 1994 to 2000, and subsequently as deputy director of the Courant Institute from 2003 to 2013 (with one semester as acting director). He was named a Guggenheim Fellow for the 1988-89 academic year, an ACM Fellow in 1998, and a Silver Professor of Computer Science in 2011.