TechTalks from event: FOCS 2011

We will be uploading the videos for FOCS 2011 during the week of Nov 28th, 2011. If you find any discrepancy, please let us know by clicking on the report error link on the talk page. If you did not permit the video to be published and we have published your talk by mistake, please notify us immediately at support AT


  • The Power of Linear Estimators Authors: Gregory Valiant and Paul Valiant
    For a broad class of practically relevant distribution properties, which includes entropy and support size, nearly all of the proposed estimators have an especially simple form. Given a set of independent samples from a discrete distribution, these estimators tally the vector of summary statistics---the number of species seen once, twice, etc. in the sample---and output the dot product between these summary statistics and a fixed vector of coefficients. We term such estimators \emph{linear}. This historical proclivity towards linear estimators is slightly perplexing since, despite many efforts over nearly 60 years, all proposed such estimators have significantly suboptimal convergence. Our main result, in some sense vindicating this insistence on linear estimators, is that for any property in this broad class, there exists a near-optimal linear estimator. Additionally, we give a practical and polynomial-time algorithm for constructing such estimators for any given parameters. While this result does not yield explicit bounds on the sample complexities of these estimation tasks, we leverage the insights it provides to give explicit constructions of linear estimators for three properties: entropy, $L_1$ distance to uniformity, and, for pairs of distributions, $L_1$ distance. Our entropy estimator, when given $O(\frac{n}{\epsilon \log n})$ independent samples from a distribution of support at most $n$, will estimate the entropy of the distribution to within accuracy $\epsilon$, with probability of failure $o(1/\mathrm{poly}(n))$. From recent lower bounds, this estimator is optimal, up to constant factors, both in its dependence on $n$ and its dependence on $\epsilon$. In particular, the inverse-linear convergence rate of this estimator resolves the main open question of [VV11], which left open the possibility that the error decreased only with the square root of the number of samples.
Our distance to uniformity estimator, given $O(\frac{m}{\epsilon^2\log m})$ independent samples from any distribution, returns an $\epsilon$-accurate estimate of the $L_1$ distance to the uniform distribution of support $m$. This is the first sublinear-sample estimator for this problem, and is constant-factor optimal, for constant $\epsilon$. Finally, our framework extends naturally to properties of pairs of distributions, including estimating the $L_1$ distance and KL-divergence between pairs of distributions. We give an explicit linear estimator for estimating $L_1$ distance to accuracy $\epsilon$ using $O(\frac{n}{\epsilon^2\log n})$ samples from each distribution, which is constant-factor optimal, for constant $\epsilon$.
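As an illustration of the "linear" form described above, the sketch below computes the fingerprint (summary statistics) of a sample and dots it with a coefficient vector. This is not the paper's near-optimal construction; the coefficients shown are the naive plug-in choice, under which the linear estimate coincides with the empirical entropy.

```python
from collections import Counter
from math import log

def fingerprint(samples):
    """Summary statistics F, where F[i] = number of distinct species
    observed exactly i times in the sample."""
    counts = Counter(samples)          # species -> multiplicity
    return Counter(counts.values())    # multiplicity -> # species

def linear_estimate(samples, coeff):
    """A linear estimator: the dot product of the fingerprint with a
    fixed coefficient vector, indexed by multiplicity."""
    F = fingerprint(samples)
    return sum(coeff(i) * f for i, f in F.items())

# Example: with coefficient (i/k) * log(k/i) for species seen i times,
# the linear estimate equals the plug-in (empirical) entropy.
samples = ['a', 'a', 'b', 'c', 'c', 'c', 'd', 'a']
k = len(samples)
plug_in = lambda i: (i / k) * log(k / i)
print(linear_estimate(samples, plug_in))
```

The paper's contribution is, roughly, that a much better choice of `coeff` exists for every property in the class and can be found in polynomial time.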
  • An algebraic proof of a robust social choice impossibility theorem Authors: Dvir Falik and Ehud Friedgut
    An important element of social choice theory is the study of impossibility theorems, such as Arrow's theorem and the Gibbard-Satterthwaite theorem, which state that under certain natural constraints, social choice mechanisms are impossible to construct. In recent years, beginning with Kalai, much work has been done on finding \textit{robust} versions of these theorems, showing that impossibility remains even when the constraints are only \textit{almost} always satisfied. In this work we present an algebraic approach for producing such results. We demonstrate it for a lesser-known variant of Arrow's theorem, found in Dokow and Holzman.
  • Planar Graphs: Random Walks and Bipartiteness Testing Authors: Artur Czumaj and Morteza Monemizadeh and Krzysztof Onak and Christian Sohler
    We initiate the study of the testability of properties in arbitrary planar graphs. We prove that bipartiteness can be tested in constant time. The previous bound for this class of graphs was O-tilde(sqrt(n)), and constant-time testability was only known for planar graphs with bounded degree. Previously used transformations of unbounded-degree sparse graphs into bounded-degree sparse graphs cannot be used to reduce the problem to the testability of bounded-degree planar graphs. Our approach extends to arbitrary minor-free graphs. Our algorithm is based on random walks. The challenge here is to analyze random walks for a class of graphs that has good separators, i.e., bad expansion. Standard techniques that use fast convergence to a uniform distribution do not work in this case. Roughly speaking, our analysis technique self-reduces the problem of finding an odd-length cycle in a multigraph G induced by a collection of cycles to another multigraph G′ induced by a set of shorter odd-length cycles, in such a way that when a random walk finds a cycle in G′ with probability p>0, then it does so with probability lambda(p)>0 in G. This reduction is applied until the cycles collapse to self-loops that can be easily detected.
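A toy sketch of the underlying random-walk primitive: if a vertex is reachable from the start by both an even-length and an odd-length walk, the graph contains an odd cycle and hence is not bipartite. This is only the basic certificate, not the paper's self-reduction; the function names and parameters are illustrative assumptions.

```python
import random

def random_walk(adj, start, length, rng):
    """Walk `length` steps; record the parity of the step at which
    each vertex was visited."""
    v, visited = start, {start: {0}}
    for step in range(1, length + 1):
        v = rng.choice(adj[v])
        visited.setdefault(v, set()).add(step % 2)
    return visited

def finds_odd_cycle(adj, start, walks, length, seed=0):
    """If some vertex accumulates both parities across walks from
    `start`, the component of `start` contains an odd-length cycle."""
    rng = random.Random(seed)
    parities = {}
    for _ in range(walks):
        for v, ps in random_walk(adj, start, length, rng).items():
            parities.setdefault(v, set()).update(ps)
            if len(parities[v]) == 2:
                return True
    return False

# A triangle (odd cycle) vs. a 4-cycle (bipartite).
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(finds_odd_cycle(triangle, 0, walks=50, length=10))
print(finds_odd_cycle(square, 0, walks=50, length=10))   # never fires
```

In a bipartite graph every vertex's distance parity from the start is fixed, so the test has one-sided error, mirroring the shape of the testers discussed above.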
  • Testing and Reconstruction of Lipschitz Functions with Applications to Data Privacy Authors: Madhav Jha and Sofya Raskhodnikova
    A function f : D -> R has Lipschitz constant c if dR(f(x), f(y)) <= c dD(x, y) for all x, y in D, where dR and dD denote the distance functions on the range and domain of f, respectively. We say a function is Lipschitz if it has Lipschitz constant 1. (Note that rescaling by a factor of 1/c converts a function with a Lipschitz constant c into a Lipschitz function.) In other words, Lipschitz functions are not very sensitive to small changes in the input. We initiate the study of testing and local reconstruction of the Lipschitz property of functions. A property tester has to distinguish functions with the property (in this case, Lipschitz) from functions that are epsilon-far from having the property, that is, differ from every function with the property on at least an epsilon fraction of the domain. A local filter reconstructs an arbitrary function f to ensure that the reconstructed function g has the desired property (in this case, is Lipschitz), changing f only when necessary. A local filter is given a function f and a query x and, after looking up the value of f on a small number of points, it has to output g(x) for some function g, which has the desired property and does not depend on x. If f has the property, g must be equal to f. We consider functions over domains {0,1}^d, {1,...,n} and {1,...,n}^d, equipped with l1 distance. We design efficient testers of the Lipschitz property for functions of the form f:{0,1}^d -> \delta Z, where \delta \in (0,1] and \delta Z is the set of multiples of \delta, and of the form f: {1,...,n} -> R, where R is (discretely) metrically convex. In the first case, the tester runs in time O(d min{d,r}/\delta\epsilon), where r is the diameter of the image of f; in the second, in time O((\log n)/\epsilon). We give corresponding lower bounds of Omega(d) and Omega(log n) on the query complexity (in the second case, only for nonadaptive 1-sided error testers).
Our lower bound for functions over {0,1}^d is tight for the case of the {0,1,2} range and constant \epsilon. The first tester implies an algorithm for functions of the form f:{0,1}^d -> R that distinguishes Lipschitz functions from functions that are \epsilon-far from (1+\delta)-Lipschitz. We also present a local filter of the Lipschitz property for functions of the form f: {1,...,n}^d -> R with lookup complexity O((log n+1)^d). For functions over {0,1}^d, we show that every nonadaptive local filter has lookup complexity exponential in d. The testers that we developed have applications to program analysis. The reconstructors have applications to data privacy. For the first application, the Lipschitz property of the function computed by a program corresponds to a notion of robustness to noise in the data. The application to privacy is based on the fact that a function f of entries in a database of sensitive information can be released with noise of magnitude proportional to a Lipschitz constant of f, while preserving the privacy of individuals whose data is stored in the database (Dwork, McSherry, Nissim and Smith, TCC 2006). We give a differentially private mechanism, based on local filters, for releasing a function f when a Lipschitz constant of f is provided by a distrusted client. We show that when no reliable Lipschitz constant of f is given, previously known differentially private mechanisms either have a substantially higher running time or have a higher expected error for a large class of symmetric functions f.
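To make the tester interface concrete, here is a naive one-sided tester on the hypercube with l1 distance: sample random hypercube edges and reject on any violated edge. This is an illustrative sketch only; the paper's testers and their far-from-Lipschitz analysis are more involved, and the helper name is a hypothetical.

```python
import random

def naive_edge_tester(f, d, queries, rng=None):
    """Naive one-sided Lipschitz tester on {0,1}^d: sample random edges
    (x, y) differing in one coordinate and reject iff |f(x)-f(y)| > 1.
    Never rejects a Lipschitz function; a violated edge is a
    certificate of non-Lipschitzness."""
    rng = rng or random.Random(0)
    for _ in range(queries):
        x = tuple(rng.randint(0, 1) for _ in range(d))
        i = rng.randrange(d)
        y = x[:i] + (1 - x[i],) + x[i + 1:]
        if abs(f(x) - f(y)) > 1:
            return False  # certainly not Lipschitz
    return True  # consistent with Lipschitz on all sampled edges

lipschitz = lambda x: sum(x)   # Lipschitz constant 1 under l1
far = lambda x: 5 * x[0]       # jumps by 5 along coordinate 0
print(naive_edge_tester(lipschitz, d=8, queries=200))  # always accepts
print(naive_edge_tester(far, d=8, queries=200))        # rejects w.h.p.
```

The interesting question the abstract addresses is how many such queries suffice to reject every function that is epsilon-far from Lipschitz, which the naive sketch does not guarantee.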


  • How to Play Unique Games Against a Semi-Random Adversary Authors: Alexandra Kolla and Konstantin Makarychev and Yury Makarychev
    In this paper, we study the average-case complexity of the Unique Games problem. We propose a natural semi-random model, in which a unique game instance is generated in several steps. First an adversary selects a completely satisfiable instance of Unique Games, then she chooses an epsilon fraction of all edges, and finally replaces ("corrupts") the constraints corresponding to these edges with new constraints. If all steps are adversarial, the adversary can obtain any (1-epsilon)-satisfiable instance, and the problem is then as hard as in the worst case. We show that known algorithms for unique games (in particular, all algorithms that use the standard SDP relaxation) fail to solve semi-random instances of Unique Games. We present an algorithm that with high probability finds a solution satisfying a (1-delta) fraction of all constraints in semi-random instances (we require that the average degree of the graph is Omega(log k)). To this end, we consider a new non-standard SDP program for Unique Games, which is not a relaxation for the problem, and show how to analyze it. We present a new rounding scheme that simultaneously uses SDP and LP solutions, which we believe is of independent interest. Finally, we study semi-random instances of Unique Games that are at most (1-epsilon) satisfiable. We present an algorithm that distinguishes between the case when the instance is a semi-random instance and the case when the instance is an (arbitrary) (1-delta)-satisfiable instance if epsilon > c delta (for some absolute constant c).
  • The Grothendieck constant is strictly smaller than Krivine's bound Authors: Mark Braverman, Konstantin Makarychev, Yury Makarychev, Assaf Naor
    We prove that $K_G<\frac{\pi}{2\log\left(1+\sqrt{2}\right)}$, where $K_G$ is the Grothendieck constant.
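For reference, Krivine's bound, which the paper shows is a strict upper bound on $K_G$, evaluates numerically to about 1.7822:

```python
import math

# Krivine's upper bound on the Grothendieck constant:
#   K_G <= pi / (2 * log(1 + sqrt(2)))
krivine = math.pi / (2 * math.log(1 + math.sqrt(2)))
print(round(krivine, 4))  # 1.7822
```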
  • A Parallel Approximation Algorithm for Positive Semidefinite Programming Authors: Rahul Jain and Penghui Yao
    Positive semidefinite programs are an important subclass of semidefinite programs in which all matrices involved in the specification of the problem are positive semidefinite and all scalars involved are non-negative. We present a parallel algorithm which, given an instance of a positive semidefinite program of size N and an approximation factor eps > 0, runs in (parallel) time poly(1/eps) polylog(N), using poly(N) processors, and outputs a value which is within a multiplicative factor of (1+eps) of the optimal. Our result generalizes an analogous result of Luby and Nisan [1993] for positive linear programs, and our algorithm is inspired by theirs.
  • Rounding Semidefinite Programming Hierarchies via Global Correlation Authors: Boaz Barak and Prasad Raghavendra and David Steurer
    We show a new way to round vector solutions of semidefinite programming (SDP) hierarchies into integral solutions, based on a connection between these hierarchies and the spectrum of the input graph. We demonstrate the utility of our method by providing a new SDP-hierarchy based algorithm for Unique Games. Our algorithm matches the performance of the recent algorithm of Arora, Barak and Steurer (FOCS 2010) in the worst case, but is shown to run in polynomial time on a richer family of instances, thus ruling out more possibilities for hard instances for the Unique Games Conjecture. Specifically, we give a rounding algorithm for $O(r)$ levels of the Lasserre hierarchy that finds a good integral solution as long as, very roughly speaking, the average correlation between vectors in the SDP solution is at least $1/r$. In the case of Unique Games, the latter condition is implied by having at most $r$ large eigenvalues in the constraint graph. This improves upon prior works that required the potentially stronger condition of a bound on the number of eigenvalues in the \emph{label extended graph}. Our algorithm actually requires fewer than the $n^{O(r)}$ constraints specified by the $r^{th}$ level of the Lasserre hierarchy, and in particular $r$ rounds of our program can be evaluated in time $2^{O(r)}\mathrm{poly}(n)$.
  • Lasserre Hierarchy, Higher Eigenvalues, and Approximation Schemes for Graph Partitioning and Quadratic Integer Programming with PSD Objectives Authors: Venkatesan Guruswami and Ali Kemal Sinop
    We present an approximation scheme for optimizing certain Quadratic Integer Programming problems with positive semidefinite objective functions and global linear constraints. This framework includes well-known graph problems such as Uniform Sparsest Cut, Minimum Graph Bisection, and Small Set Expansion, as well as the Unique Games problem. These problems are notorious for the existence of huge gaps between the known algorithmic results and NP-hardness results. Our algorithm is based on rounding semidefinite programs from the Lasserre hierarchy, and the analysis uses bounds for low-rank approximations of a matrix in Frobenius norm using columns of the matrix. For all the above graph problems, we give an algorithm running in time $n^{O(r/\epsilon^2)}$ with approximation ratio $\frac{1+\epsilon}{\min\{1,\lambda_r\}}$, where $\lambda_r$ is the $r$'th smallest eigenvalue of the normalized graph Laplacian $\mathcal{L}$. In the case of graph bisection and small set expansion, the number of vertices in the cut is within lower-order terms of the stipulated bound. Our results imply a $(1+O(\epsilon))$-factor approximation in time $n^{O(r^\ast)}$, where $r^\ast$ is the number of eigenvalues of $\mathcal{L}$ smaller than $1-\epsilon$. This perhaps gives some indication as to why even showing mere APX-hardness for these problems has been elusive, since the reduction must produce graphs with a slowly growing spectrum (and classes like planar graphs, which are known to have such a spectral property, often admit good algorithms owing to their nice structure). For Unique Games, we give a factor $(1+\frac{2+\epsilon}{\lambda_r})$ approximation for minimizing the number of unsatisfied constraints in $n^{O(r/\epsilon)}$ time. This improves an earlier bound for solving Unique Games on expanders, and also shows that Lasserre SDPs are powerful enough to solve well-known integrality gap instances for the basic SDP.
We also give an algorithm for independent sets in graphs that performs well when the Laplacian does not have too many eigenvalues bigger than $1+o(1)$.


  • Markov Layout Authors: Flavio Chierichetti and Ravi Kumar and Prabhakar Raghavan
    Consider the following problem of laying out a set of $n$ images that match a query onto the nodes of a $\sqrt{n}\times\sqrt{n}$ grid. We are given a score for each image, as well as the distribution of patterns by which a user's eye scans the nodes of the grid, and we wish to maximize the expected total score of images selected by the user. This is a special case of the \emph{Markov layout problem}, in which we are given a Markov chain $M$ together with a set of objects to be placed at the states of the Markov chain. Each object has a utility to the user if viewed, as well as a stopping probability with which the user ceases to look further at objects. We point out that this layout problem is prototypical in a number of applications in web search and advertising, particularly in the emerging genre of search results pages from major engines. In a different class of applications, the states of the Markov chain are web pages at a publisher's website and the objects are advertisements. In this paper we study the approximability of the Markov layout problem. Our main result is an $O(\log n)$ approximation algorithm for the most general version of the problem. The core idea behind the algorithm is to transform an optimization problem over partial permutations into an optimization problem over sets by losing a logarithmic factor in approximation; the latter problem is then shown to be submodular with two matroid constraints, which admits a constant-factor approximation. In contrast, we also show the problem is APX-hard via a reduction from {\sc Cubic Max-Bisection}. We then study harder variants of the problem in which no \emph{gaps} --- states of $M$ with no object placed on them --- are allowed. By exploiting the geometry, we obtain an $O(\log^{3/2} n)$ approximation algorithm when the digraph underlying $M$ is a grid and an $O(\log n)$ approximation algorithm when it is a tree. These special cases are especially appropriate for our applications.
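A minimal sketch of the objective being optimized, under a simple read-stop-move reading of the model above (the function `layout_value` and the two-state example are illustrative assumptions, not the paper's algorithm): the user collects the utility of the object at the current state, stops with that object's stopping probability, and otherwise moves according to the chain.

```python
def layout_value(M, utility, stop, start, iters=200):
    """Expected total utility of a layout: solves
    V = u + diag(1 - q) * M * V by fixed-point iteration, where
    u[s] and q[s] are the utility and stopping probability of the
    object placed at state s, and M is the transition matrix."""
    n = len(M)
    V = [0.0] * n
    for _ in range(iters):
        V = [utility[s] + (1 - stop[s]) * sum(M[s][t] * V[t] for t in range(n))
             for s in range(n)]
    return V[start]

# Two-state chain: state 0 hops to 1, state 1 hops back to 0.
M = [[0.0, 1.0], [1.0, 0.0]]
value = layout_value(M, utility=[1.0, 2.0], stop=[0.5, 1.0], start=0)
print(value)  # 1 + 0.5 * 2 = 2.0
```

The layout problem is then to assign objects (with their utilities and stopping probabilities) to states so as to maximize this value, which is what makes the permutation structure hard.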
  • Limitations of Randomized Mechanisms for Combinatorial Auctions Authors: Shaddin Dughmi and Jan Vondrak
    The design of computationally efficient and incentive compatible mechanisms that solve or approximate fundamental resource allocation problems is the main goal of algorithmic mechanism design. A central example in both theory and practice is welfare-maximization in combinatorial auctions. Recently, a randomized mechanism has been discovered for combinatorial auctions that is truthful in expectation and guarantees a $(1-1/e)$-approximation to the optimal social welfare when players have coverage valuations \cite{DRY11}. This approximation ratio is the best possible even for non-truthful algorithms, assuming $P \neq NP$ \cite{KLMM05}. Given the recent sequence of negative results for combinatorial auctions under more restrictive notions of incentive compatibility \cite{DN07,BDFKMPSSU10,Dobzin11}, this development raises a natural question: Are truthful-in-expectation mechanisms compatible with polynomial-time approximation in a way that deterministic or universally truthful mechanisms are not? In particular, can polynomial-time truthful-in-expectation mechanisms guarantee a near-optimal approximation ratio for more general variants of combinatorial auctions? We prove that this is not the case. Specifically, the result of \cite{DRY11} cannot be extended to combinatorial auctions with submodular valuations in the value oracle model. (Absent strategic considerations, a $(1-1/e)$-approximation is still achievable in this setting \cite{V08}.) More precisely, we prove that there is a constant $\gamma>0$ such that there is no randomized mechanism that is truthful-in-expectation --- or even approximately truthful-in-expectation --- and guarantees an $m^{-\gamma}$-approximation to the optimal social welfare for combinatorial auctions with submodular valuations in the value oracle model. 
We also prove an analogous result for the flexible combinatorial public projects (CPP) problem, where a truthful-in-expectation $(1-1/e)$-approximation for coverage valuations has been recently developed \cite{Dughmi11}. We show that there is no truthful-in-expectation --- or even approximately truthful-in-expectation --- mechanism that achieves an $m^{-\gamma}$-approximation to the optimal social welfare for combinatorial public projects with submodular valuations in the value oracle model. Both our results present an unexpected separation between coverage functions and submodular functions, which does not occur for these problems without strategic considerations.
  • Bayesian Combinatorial Auctions: Expanding Single Buyer Mechanisms to Many Buyers Authors: Saeed Alaei
    For Bayesian combinatorial auctions, we present a general framework for reducing the mechanism design problem for many buyers to the mechanism design problem for one buyer. Our generic reduction works for any separable objective (e.g., welfare, revenue, etc.), any space of valuations (e.g., submodular, additive, etc.), and any distribution of valuations, as long as the valuations of different buyers are distributed independently (not necessarily identically). Roughly speaking, we present two generic $n$-buyer mechanisms that use $1$-buyer mechanisms as black boxes. We show that if we have an $\alpha$-approximate $1$-buyer mechanism for each buyer\footnote{Note that we can use different $1$-buyer mechanisms for different buyers.} then our generic $n$-buyer mechanisms are a $\frac{1}{2}\alpha$-approximation of the optimal $n$-buyer mechanism. Furthermore, if we have several copies of each item and no buyer ever needs more than $\frac{1}{k}$ of all copies of each item, then our generic $n$-buyer mechanisms are a $\gamma_k \alpha$-approximation of the optimal $n$-buyer mechanism, where $\gamma_k \ge 1-\frac{1}{\sqrt{k+3}}$. Observe that $\gamma_k$ is at least $\frac{1}{2}$ and approaches $1$ as $k$ increases. Applications of our main theorem include the following improvements on results from the literature. For each of the following models we construct a $1$-buyer mechanism and then apply our generic expansion: For revenue maximization in combinatorial auctions with hard budget constraints, \cite{BGGM10} presented a $\frac{1}{4}$-approximate BIC mechanism for additive/correlated valuations and an $O(1)$-approximate\footnote{$O(1)=\frac{1}{96}$} sequential posted pricing mechanism for additive/independent valuations. We improve this to a $\gamma_k$-approximate BIC mechanism and a $\gamma_k (1-\frac{1}{e})$-approximate sequential posted pricing mechanism, respectively.
For revenue maximization in combinatorial auctions with unit demand buyers, \cite{CHMS10} presented a $\frac{1}{6.75}$-approximate sequential posted pricing mechanism. We improve this to a $\frac{1}{2}\gamma_k$-approximate sequential posted pricing mechanism. We also present a $\gamma_k$-approximate sequential posted pricing mechanism for unit-demand multi-unit auctions (homogeneous) with hard budget constraints. Furthermore, our sequential posted pricing mechanisms assume no control or prior information about the order in which buyers arrive.
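The guarantee $\gamma_k \ge 1-\frac{1}{\sqrt{k+3}}$ quoted above can be tabulated directly, confirming that the lower bound starts at $\frac{1}{2}$ for $k=1$ and approaches $1$ as $k$ grows:

```python
from math import sqrt

def gamma_lower(k):
    """Lower bound on gamma_k from the abstract: 1 - 1/sqrt(k+3)."""
    return 1 - 1 / sqrt(k + 3)

for k in (1, 10, 100, 1000):
    print(k, round(gamma_lower(k), 3))
```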
  • Extreme-Value Theorems for Optimal Multidimensional Pricing Authors: Yang Cai and Constantinos Daskalakis
    We provide a Polynomial Time Approximation Scheme for the {\em multi-dimensional unit-demand pricing problem}, when the buyer's values are independent (but not necessarily identically distributed). For all $\epsilon>0$, we obtain a $(1+\epsilon)$-factor approximation to the optimal revenue in time that is polynomial when the values are sampled from Monotone Hazard Rate (MHR) distributions, quasi-polynomial when sampled from regular distributions, and $n^{{\rm poly}(\log r)}$ when sampled from general distributions supported on a set $[u_{min}, r u_{min}]$. We also provide an additive PTAS for all bounded distributions. Our algorithms are based on novel extreme value theorems for MHR and regular distributions, and apply probabilistic techniques to understand the statistical properties of revenue distributions, as well as to reduce the size of the search space of the algorithm. As a byproduct of our techniques, we establish structural properties of optimal solutions. We show that, for all $\epsilon >0$, $g(1/\epsilon)$ distinct prices suffice to obtain a $(1+\epsilon)$-factor approximation to the optimal revenue for MHR distributions, where $g(1/\epsilon)$ is a quasi-linear function of $1/\epsilon$ that does not depend on the number of items. Similarly, for all $\epsilon>0$ and $n>0$, $g(1/\epsilon \cdot \log n)$ distinct prices suffice for regular distributions, where $n$ is the number of items and $g(\cdot)$ is a polynomial function. Finally, in the i.i.d. MHR case, we show that, as long as the number of items is a sufficiently large function of $1/\epsilon$, a single price suffices to achieve a $(1+\epsilon)$-factor approximation.
  • Efficient computation of approximate pure Nash equilibria in congestion games Authors: Ioannis Caragiannis and Angelo Fanelli and Nick Gravin and Alexander Skopalik
    Congestion games constitute an important class of games in which computing an exact or even approximate pure Nash equilibrium is in general {\sf PLS}-complete. We present a surprisingly simple polynomial-time algorithm that computes $O(1)$-approximate Nash equilibria in these games. In particular, for congestion games with linear latency functions, our algorithm computes $(2+\epsilon)$-approximate pure Nash equilibria in time polynomial in the number of players, the number of resources and $1/\epsilon$. It also applies to games with polynomial latency functions with constant maximum degree $d$; there, the approximation guarantee is $d^{O(d)}$. The algorithm essentially identifies a polynomially long sequence of best-response moves that lead to an approximate equilibrium; the existence of such short sequences is interesting in itself. These are the first positive algorithmic results for approximate equilibria in non-symmetric congestion games. We strengthen them further by proving that, for congestion games that deviate from our mild assumptions, computing $\rho$-approximate equilibria is {\sf PLS}-complete for any polynomial-time computable $\rho$.
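For contrast with the carefully ordered move sequence described above, here is plain best-response dynamics in a small congestion game with linear latencies. This is a hedged sketch: `best_response_dynamics` and the two-resource example are illustrative assumptions, and unordered dynamics carries no polynomial-time guarantee in general, which is precisely the gap the paper's algorithm addresses.

```python
def best_response_dynamics(num_players, strategies, latency, max_rounds=100):
    """Each player repeatedly switches to a strategy (set of resources)
    minimizing her congestion cost given the others' current choices;
    stops when no player can improve (a pure Nash equilibrium)."""
    choice = [strategies[0]] * num_players

    def cost(p, strat):
        # Load on each resource if player p uses `strat` and the
        # others keep their current strategies.
        load = {}
        for q in range(num_players):
            for r in (strat if q == p else choice[q]):
                load[r] = load.get(r, 0) + 1
        return sum(latency[r](load[r]) for r in strat)

    for _ in range(max_rounds):
        moved = False
        for p in range(num_players):
            best = min(strategies, key=lambda s: cost(p, s))
            if cost(p, best) < cost(p, choice[p]):
                choice[p] = best
                moved = True
        if not moved:
            return choice  # pure Nash equilibrium reached
    return choice

# Two resources with linear latencies; 4 players each pick one resource.
strategies = [frozenset({'a'}), frozenset({'b'})]
latency = {'a': lambda x: x, 'b': lambda x: 2 * x}
eq = best_response_dynamics(4, strategies, latency)
print(sorted(next(iter(s)) for s in eq))  # ['a', 'a', 'a', 'b']
```

In the equilibrium reached, the lone player on resource `b` pays 2 and would pay 4 by joining `a`, while each player on `a` pays 3 and would pay 4 by moving, so no best-response move remains.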