TechTalks from event: FOCS 2011

We will be uploading the videos for FOCS 2011 during the week of Nov 28th, 2011. If you find any discrepancy, please let us know by clicking the "report error" link on the talk page. If you did not give permission for your talk to be published and we have published it by mistake, please notify us immediately at support AT weyond.com

Session 7B

  • Markov Layout. Authors: Flavio Chierichetti, Ravi Kumar, and Prabhakar Raghavan
    Consider the following problem of laying out a set of $n$ images that match a query onto the nodes of a $\sqrt{n}\times\sqrt{n}$ grid. We are given a score for each image, as well as the distribution of patterns by which a user's eye scans the nodes of the grid, and we wish to maximize the expected total score of images selected by the user. This is a special case of the \emph{Markov layout problem}, in which we are given a Markov chain $M$ together with a set of objects to be placed at the states of the Markov chain. Each object has a utility to the user if viewed, as well as a stopping probability with which the user ceases to look further at objects. We point out that this layout problem is prototypical in a number of applications in web search and advertising, particularly in the emerging genre of search results pages from major engines. In a different class of applications, the states of the Markov chain are web pages at a publisher's website and the objects are advertisements. In this paper we study the approximability of the Markov layout problem. Our main result is an $O(\log n)$ approximation algorithm for the most general version of the problem. The core idea behind the algorithm is to transform an optimization problem over partial permutations into an optimization problem over sets by losing a logarithmic factor in approximation; the latter problem is then shown to be submodular with two matroid constraints, which admits a constant-factor approximation. In contrast, we also show the problem is APX-hard via a reduction from {\sc Cubic Max-Bisection}. We then study harder variants of the problem in which no \emph{gaps} --- states of $M$ with no object placed on them --- are allowed. By exploiting the geometry, we obtain an $O(\log^{3/2} n)$ approximation algorithm when the digraph underlying $M$ is a grid and an $O(\log n)$ approximation algorithm when it is a tree. These special cases are especially appropriate for our applications.
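    As a concrete illustration of the objective being optimized, here is a minimal Monte Carlo sketch in Python (not the paper's algorithm; the tiny 3-state instance, the start state, and all names are assumptions made up for this example):

      import random

      P = [[0.0, 0.7, 0.3],       # row-stochastic transition matrix of M
           [0.0, 0.0, 1.0],
           [0.0, 0.0, 0.0]]       # state 2 has no outgoing edges
      utility = [5.0, 2.0, 1.0]   # utility of the object placed at each state
      stop = [0.4, 0.1, 1.0]      # stopping probability of each object

      def expected_score(trials=100_000):
          total = 0.0
          for _ in range(trials):
              s = 0                          # assume walks start at state 0
              while True:
                  total += utility[s]        # user views the object at s
                  if random.random() < stop[s]:
                      break                  # user stops looking
                  if sum(P[s]) == 0:
                      break                  # no outgoing transition
                  s = random.choices(range(len(P[s])), weights=P[s])[0]
          return total / trials

      print(expected_score())

    The layout problem is to decide which object goes on which state so that this expectation is maximized; the sketch only evaluates a fixed layout.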
  • Limitations of Randomized Mechanisms for Combinatorial Auctions. Authors: Shaddin Dughmi and Jan Vondrak
    The design of computationally efficient and incentive compatible mechanisms that solve or approximate fundamental resource allocation problems is the main goal of algorithmic mechanism design. A central example in both theory and practice is welfare-maximization in combinatorial auctions. Recently, a randomized mechanism has been discovered for combinatorial auctions that is truthful in expectation and guarantees a $(1-1/e)$-approximation to the optimal social welfare when players have coverage valuations \cite{DRY11}. This approximation ratio is the best possible even for non-truthful algorithms, assuming $P \neq NP$ \cite{KLMM05}. Given the recent sequence of negative results for combinatorial auctions under more restrictive notions of incentive compatibility \cite{DN07,BDFKMPSSU10,Dobzin11}, this development raises a natural question: Are truthful-in-expectation mechanisms compatible with polynomial-time approximation in a way that deterministic or universally truthful mechanisms are not? In particular, can polynomial-time truthful-in-expectation mechanisms guarantee a near-optimal approximation ratio for more general variants of combinatorial auctions? We prove that this is not the case. Specifically, the result of \cite{DRY11} cannot be extended to combinatorial auctions with submodular valuations in the value oracle model. (Absent strategic considerations, a $(1-1/e)$-approximation is still achievable in this setting \cite{V08}.) More precisely, we prove that there is a constant $\gamma>0$ such that there is no randomized mechanism that is truthful-in-expectation --- or even approximately truthful-in-expectation --- and guarantees an $m^{-\gamma}$-approximation to the optimal social welfare for combinatorial auctions with submodular valuations in the value oracle model. We also prove an analogous result for the flexible combinatorial public projects (CPP) problem, where a truthful-in-expectation $(1-1/e)$-approximation for coverage valuations has been recently developed \cite{Dughmi11}. We show that there is no truthful-in-expectation --- or even approximately truthful-in-expectation --- mechanism that achieves an $m^{-\gamma}$-approximation to the optimal social welfare for combinatorial public projects with submodular valuations in the value oracle model. Both our results present an unexpected separation between coverage functions and submodular functions, which does not occur for these problems without strategic considerations.
  • Bayesian Combinatorial Auctions: Expanding Single Buyer Mechanisms to Many Buyers. Authors: Saeed Alaei
    For Bayesian combinatorial auctions, we present a general framework for reducing the mechanism design problem for many buyers to the mechanism design problem for one buyer. Our generic reduction works for any separable objective (e.g., welfare, revenue, etc.) and any space of valuations (e.g., submodular, additive, etc.) and any distribution of valuations, as long as valuations of different buyers are distributed independently (not necessarily identically). Roughly speaking, we present two generic $n$-buyer mechanisms that use $1$-buyer mechanisms as black boxes. We show that if we have an $\alpha$-approximate $1$-buyer mechanism for each buyer\footnote{Note that we can use different $1$-buyer mechanisms for different buyers.} then our generic $n$-buyer mechanisms are $\frac{1}{2}\alpha$-approximations of the optimal $n$-buyer mechanism. Furthermore, if we have several copies of each item and no buyer ever needs more than $\frac{1}{k}$ of all copies of each item, then our generic $n$-buyer mechanisms are $\gamma_k \alpha$-approximations of the optimal $n$-buyer mechanism, where $\gamma_k \ge 1-\frac{1}{\sqrt{k+3}}$. Observe that $\gamma_k$ is at least $\frac{1}{2}$ and approaches $1$ as $k$ increases. Applications of our main theorem include the following improvements on results from the literature. For each of the following models we construct a $1$-buyer mechanism and then apply our generic expansion: For revenue maximization in combinatorial auctions with hard budget constraints, \cite{BGGM10} presented a $\frac{1}{4}$-approximate BIC mechanism for additive/correlated valuations and an $O(1)$-approximate\footnote{$O(1)=\frac{1}{96}$} sequential posted pricing mechanism for additive/independent valuations. We improve this to a $\gamma_k$-approximate BIC mechanism and a $\gamma_k (1-\frac{1}{e})$-approximate sequential posted pricing mechanism, respectively. For revenue maximization in combinatorial auctions with unit-demand buyers, \cite{CHMS10} presented a $\frac{1}{6.75}$-approximate sequential posted pricing mechanism. We improve this to a $\frac{1}{2} \gamma_k$-approximate sequential posted pricing mechanism. We also present a $\gamma_k$-approximate sequential posted pricing mechanism for unit-demand multi-unit auctions (homogeneous) with hard budget constraints. Furthermore, our sequential posted pricing mechanisms assume no control over or prior information about the order in which buyers arrive.
  • Extreme-Value Theorems for Optimal Multidimensional Pricing. Authors: Yang Cai and Constantinos Daskalakis
    We provide a Polynomial Time Approximation Scheme for the {\em multi-dimensional unit-demand pricing problem}, when the buyer's values are independent (but not necessarily identically distributed). For all $\epsilon>0$, we obtain a $(1+\epsilon)$-factor approximation to the optimal revenue in time polynomial, when the values are sampled from Monotone Hazard Rate (MHR) distributions, quasi-polynomial, when sampled from regular distributions, and polynomial in $n^{{\rm poly}(\log r)}$, when sampled from general distributions supported on a set $[u_{min}, r u_{min}]$. We also provide an additive PTAS for all bounded distributions. Our algorithms are based on novel extreme value theorems for MHR and regular distributions, and apply probabilistic techniques to understand the statistical properties of revenue distributions, as well as to reduce the size of the search space of the algorithm. As a byproduct of our techniques, we establish structural properties of optimal solutions. We show that, for all $\epsilon >0$, $g(1/\epsilon)$ distinct prices suffice to obtain a $(1+\epsilon)$-factor approximation to the optimal revenue for MHR distributions, where $g(1/\epsilon)$ is a quasi-linear function of $1/\epsilon$ that does not depend on the number of items. Similarly, for all $\epsilon>0$ and $n>0$, $g(1/\epsilon \cdot \log n)$ distinct prices suffice for regular distributions, where $n$ is the number of items and $g(\cdot)$ is a polynomial function. Finally, in the i.i.d. MHR case, we show that, as long as the number of items is a sufficiently large function of $1/\epsilon$, a single price suffices to achieve a $(1+\epsilon)$-factor approximation.
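    The "single price suffices" phenomenon can be eyeballed with a quick Monte Carlo experiment (an illustrative Python sketch under assumed parameters, not the paper's PTAS): with one price $p$ on every item, a unit-demand buyer purchases iff $\max_i v_i \ge p$, so the expected revenue is $p \cdot \Pr[\max_i v_i \ge p]$.

      import random

      n, trials = 20, 50_000          # assumed instance size

      def revenue(p):
          # exponential values are MHR; estimate p * Pr[max value >= p]
          sales = sum(max(random.expovariate(1.0) for _ in range(n)) >= p
                      for _ in range(trials))
          return p * sales / trials

      for p in [0.5, 1.0, 2.0, 3.0, 4.0, 5.0]:
          print(f"price {p:4.1f}  expected revenue ~ {revenue(p):.3f}")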
  • Efficient computation of approximate pure Nash equilibria in congestion games. Authors: Ioannis Caragiannis, Angelo Fanelli, Nick Gravin, and Alexander Skopalik
    Congestion games constitute an important class of games in which computing an exact or even approximate pure Nash equilibrium is in general {\sf PLS}-complete. We present a surprisingly simple polynomial-time algorithm that computes $O(1)$-approximate Nash equilibria in these games. In particular, for congestion games with linear latency functions, our algorithm computes $(2+\epsilon)$-approximate pure Nash equilibria in time polynomial in the number of players, the number of resources and $1/\epsilon$. It also applies to games with polynomial latency functions with constant maximum degree $d$; there, the approximation guarantee is $d^{O(d)}$. The algorithm essentially identifies a polynomially long sequence of best-response moves that lead to an approximate equilibrium; the existence of such short sequences is interesting in itself. These are the first positive algorithmic results for approximate equilibria in non-symmetric congestion games. We strengthen them further by proving that, for congestion games that deviate from our mild assumptions, computing $\rho$-approximate equilibria is {\sf PLS}-complete for any polynomial-time computable $\rho$.
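    For intuition, plain $(1+\epsilon)$-best-response dynamics on a toy linear congestion game look as follows (a Python sketch of the naive dynamics, not the paper's carefully scheduled sequence of moves; the instance is an assumption):

      a = {"e1": 1.0, "e2": 1.0, "e3": 2.0}   # latency of e under load x is a[e]*x
      strategies = {                           # each player picks one resource set
          0: [{"e1"}, {"e2", "e3"}],
          1: [{"e1", "e2"}, {"e3"}],
          2: [{"e2"}, {"e1", "e3"}],
      }
      choice = {p: 0 for p in strategies}      # current strategy index per player

      def load(e):
          return sum(e in strategies[p][choice[p]] for p in strategies)

      def cost(p, s):
          # cost of player p if it deviated to strategy s (others fixed)
          cur = strategies[p][choice[p]]
          return sum(a[e] * (load(e) + (e not in cur)) for e in s)

      eps = 0.1
      improved = True
      while improved:
          improved = False
          for p in strategies:
              now = cost(p, strategies[p][choice[p]])
              for i, s in enumerate(strategies[p]):
                  if cost(p, s) * (1 + eps) < now:
                      choice[p] = i            # (1+eps)-improving move
                      improved = True
                      break

      print({p: sorted(strategies[p][choice[p]]) for p in strategies})

    On general instances such unscheduled dynamics can take exponentially many steps; the paper's contribution is an ordering of the moves that provably reaches an approximate equilibrium in polynomial time.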

Session 8

  • On Range Searching in the Group Model and Combinatorial Discrepancy. Authors: Kasper Green Larsen
    In this paper we establish an intimate connection between dynamic range searching in the group model and combinatorial discrepancy. Our result states that, for a broad class of range searching data structures (including all known upper bounds), it must hold that $t_u t_q = \Omega(\mathrm{disc}^2/\lg n)$, where $t_u$ is the worst case update time, $t_q$ the worst case query time and $\mathrm{disc}$ is the combinatorial discrepancy of the range searching problem in question. This relation immediately implies a whole range of exceptionally high and near-tight lower bounds for all of the basic range searching problems. We list a few of them in the following: \begin{itemize} \item For halfspace range searching in $d$-dimensional space, we get a lower bound of $t_u t_q = \Omega(n^{1-1/d}/\lg n)$. This comes within a $\lg n \lg \lg n$ factor of the best known upper bound. \item For orthogonal range searching in $d$-dimensional space, we get a lower bound of $t_u t_q = \Omega(\lg^{d-2+\mu(d)}n)$, where $\mu(d)>0$ is some small but strictly positive function of $d$. \item For ball range searching in $d$-dimensional space, we get a lower bound of $t_u t_q = \Omega(n^{1-1/d}/\lg n)$. \end{itemize} We note that the previous highest lower bound for any explicit problem, due to Patrascu [STOC'07], states that $t_q =\Omega((\lg n/\lg(\lg n+t_u))^2)$, which does, however, hold for a less restrictive class of data structures. Our result also has implications for the field of combinatorial discrepancy. Using textbook range searching solutions, we improve on the best known discrepancy upper bound for axis-aligned rectangles in dimensions $d \geq 3$.
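    To see how the general relation yields the bounds above, instantiate it with the known discrepancy bound for halfspaces, $\mathrm{disc} = \Theta(n^{1/2 - 1/(2d)})$ in $d$ dimensions:

      $$t_u t_q = \Omega\left(\frac{\mathrm{disc}^2}{\lg n}\right) = \Omega\left(\frac{n^{1-1/d}}{\lg n}\right),$$

    which is exactly the first bullet's bound.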
  • A Randomized Rounding Approach to the Traveling Salesman Problem. Authors: Shayan Oveis Gharan, Amin Saberi, and Mohit Singh
    For some positive constant $\epsilon_0$, we give a $(\frac{3}{2}-\epsilon_0)$-approximation algorithm for the following problem: given a graph $G_0=(V,E_0)$, find the shortest tour that visits every vertex at least once. This is a special case of the metric traveling salesman problem when the underlying metric is defined by shortest path distances in $G_0$. The result improves on the $\frac{3}{2}$-approximation algorithm due to Christofides for this special case. Similar to Christofides, our algorithm finds a spanning tree whose cost is upper bounded by the optimum, and then finds the minimum cost Eulerian augmentation (or T-join) of that tree. The main difference is in the selection of the spanning tree. Except in certain cases where the solution of the LP is nearly integral, we select the spanning tree randomly by sampling from a maximum entropy distribution defined by the linear programming relaxation. Despite the simplicity of the algorithm, the analysis builds on a variety of ideas such as properties of strongly Rayleigh measures from probability theory, graph theoretical results on the structure of near minimum cuts, and the integrality of the T-join polytope from polyhedral theory. Also, as a byproduct of our result, we show new properties of the near minimum cuts of any graph, which may be of independent interest.
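    Both this paper and the next one build on the Christofides skeleton: pick a spanning tree, then fix its parity with a minimum T-join. A compact Python sketch of that skeleton follows (an illustration only: the toy points are assumptions, the brute-force matching is exponential and viable only for tiny instances, and the paper's actual contribution, sampling the tree from a maximum-entropy distribution, is not reproduced):

      pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 5)]   # toy metric instance
      def d(i, j):
          (x1, y1), (x2, y2) = pts[i], pts[j]
          return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

      n = len(pts)

      # 1. minimum spanning tree (Prim)
      in_tree, tree = {0}, []
      while len(in_tree) < n:
          i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                     key=lambda e: d(*e))
          in_tree.add(j); tree.append((i, j))

      # 2. min-weight perfect matching on the tree's odd-degree vertices (brute force)
      deg = [0] * n
      for i, j in tree:
          deg[i] += 1; deg[j] += 1
      odd = [v for v in range(n) if deg[v] % 2]

      def best_matching(vs):
          if not vs:
              return 0.0, []
          v, rest = vs[0], vs[1:]
          best = (float("inf"), [])
          for k, u in enumerate(rest):
              w, m = best_matching(rest[:k] + rest[k + 1:])
              if d(v, u) + w < best[0]:
                  best = (d(v, u) + w, m + [(v, u)])
          return best

      match_w, matching = best_matching(odd)
      print("tree cost  :", sum(d(i, j) for i, j in tree))
      print("T-join cost:", match_w)
      # tree + matching is Eulerian; shortcutting its Euler tour gives the tour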
  • Approximating Graphic TSP by Matchings. Authors: Tobias Moemke and Ola Svensson
    We present a framework for approximating the metric TSP based on a novel use of matchings. Traditionally, matchings have been used to add edges in order to make a given graph Eulerian, whereas our approach also allows for the removal of certain edges leading to a decreased cost. For the TSP on graphic metrics (graph-TSP), the approach yields a 1.461-approximation algorithm with respect to the Held-Karp lower bound. For graph-TSP restricted to a class of graphs that contains degree three bounded and claw-free graphs, we show that the integrality gap of the Held-Karp relaxation matches the conjectured ratio 4/3. The framework allows for generalizations in a natural way and also leads to a 1.586-approximation algorithm for the traveling salesman path problem on graphic metrics where the start and end vertices are prespecified.

Session 9A

  • A Unified Continuous Greedy Algorithm for Submodular Maximization. Authors: Moran Feldman, Joseph (Seffi) Naor, and Roy Schwartz
    The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to economics, algorithmic game theory and combinatorial optimization. Classical works on these problems are mostly combinatorial in nature. Recently, however, many results based on continuous algorithmic tools have emerged. The main bottleneck of such continuous techniques is how to approximately solve a non-convex relaxation for the submodular problem at hand. Thus, the efficient computation of better fractional solutions immediately implies improved approximations for numerous applications. A simple and elegant method, called ``continuous greedy'', successfully tackles this issue for monotone submodular objective functions; however, only much more complex tools are known to work for general non-monotone submodular objectives. In this work we present a new unified continuous greedy algorithm which finds approximate fractional solutions for both the non-monotone and monotone cases, and improves on the approximation ratio for many applications. For general non-monotone submodular objective functions, our algorithm achieves an improved approximation ratio of about 1/e. For monotone submodular objective functions, our algorithm achieves an approximation ratio that depends on the density of the polytope defined by the problem at hand, which is always at least as good as the previously known best approximation ratio of 1 - 1/e. Some notable immediate implications are an improved 1/e-approximation for maximizing a non-monotone submodular function subject to a matroid or O(1)-knapsack constraints, and information-theoretic tight approximations for Submodular Max-SAT and Submodular Welfare with k players, for any number of players k. A framework for submodular optimization problems, called the contention resolution framework, was introduced recently by Chekuri et al. The improved approximation ratio of the unified continuous greedy algorithm implies improved approximation ratios for many problems through this framework. Moreover, via a parameter called stopping time, our algorithm merges the relaxation solving and re-normalization steps of the framework and achieves, for some applications, further improvements. We also describe new monotone balanced contention resolution schemes for various matching, scheduling and packing problems, thus improving the approximations achieved for these problems via the framework.
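    The monotone continuous greedy that this work generalizes admits a very small sampling-based sketch (the basic Calinescu-Chekuri-Pal-Vondrak scheme over a cardinality-constraint polytope, with a toy coverage objective; all parameters are assumptions, and the paper's unified non-monotone variant is not reproduced):

      import random

      universe = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}  # toy set system
      def f(S):                        # monotone submodular: coverage
          return len(set().union(*(universe[i] for i in S))) if S else 0

      n, k, T, samples = len(universe), 2, 50, 30
      x = [0.0] * n                    # fractional point in [0,1]^n

      def sample_R(x):
          return [i for i in range(n) if random.random() < x[i]]

      for _ in range(T):
          # estimate the multilinear gradient E[f(R + i) - f(R)] by sampling
          # (left unnormalized: only the ranking of coordinates matters below)
          grad = [0.0] * n
          for _ in range(samples):
              R = sample_R(x)
              base = f(R)
              for i in range(n):
                  grad[i] += f(set(R) | {i}) - base
          # move at rate 1/T towards the best vertex of the k-cardinality polytope
          B = sorted(range(n), key=lambda i: grad[i], reverse=True)[:k]
          for i in B:
              x[i] = min(1.0, x[i] + 1.0 / T)

      print("fractional solution:", [round(v, 2) for v in x])
      # rounding x (e.g. pipage or swap rounding) would yield the final set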
  • Enumerative Lattice Algorithms in any Norm via M-ellipsoid Coverings. Authors: Daniel Dadush, Chris Peikert, and Santosh Vempala
    We give a novel algorithm for enumerating lattice points in any convex body, and give applications to several classic lattice problems, including the Shortest and Closest Vector Problems (SVP and CVP, respectively) and Integer Programming (IP). Our enumeration technique relies on a classical concept from asymptotic convex geometry known as the \emph{M-ellipsoid}, and uses as a crucial subroutine the recent algorithm of Micciancio and Voulgaris (STOC 2010) for lattice problems in the $\ell_{2}$ norm. As a main technical contribution, which may be of independent interest, we build on the techniques of Klartag (Geometric and Functional Analysis, 2006) to give an expected $2^{O(n)}$-time algorithm for computing an M-ellipsoid for any $n$-dimensional convex body. As applications, we give deterministic $2^{O(n)}$-time and -space algorithms for solving exact SVP, and exact CVP when the target point is sufficiently close to the lattice, on $n$-dimensional lattices \emph{in any (semi-)norm} given an M-ellipsoid of the unit ball. In many norms of interest, including all $\ell_{p}$ norms, an M-ellipsoid is computable in deterministic $\mathrm{poly}(n)$ time, in which case these algorithms are fully deterministic. Here our approach may be seen as a derandomization of the ``AKS sieve'' for exact SVP and CVP (Ajtai, Kumar, and Sivakumar; STOC 2001 and CCC 2002). As a further application of our SVP algorithm, we derive an expected $O(f^*(n))^n$-time algorithm for Integer Programming, where $f^*(n)$ denotes the optimal bound in the so-called ``flatness theorem,'' which satisfies $f^*(n) = O(n^{4/3} \mathrm{polylog}(n))$ and is conjectured to be $f^{*}(n)=\Theta(n)$. Our runtime improves upon the previous best of $O(n^{2})^{n}$ by Hildebrand and K{\"o}ppe (2010).
  • A nearly $m \log n$ time solver for SDD linear systems. Authors: Ioannis Koutis, Gary L. Miller, and Richard Peng
    We present an improved algorithm for solving symmetric diagonally dominant (SDD) linear systems. Given an $n\times n$ symmetric diagonally dominant matrix $A$ with $m$ non-zero entries and a vector $b$ such that $A\bar{x} = b$ for some (unknown) vector $\bar{x}$, our algorithm computes a vector $x$ such that $||x-\bar{x}||_A < \epsilon ||\bar{x}||_A$\footnote{$||\cdot||_A$ denotes the $A$-norm.} in time $\tilde{O}(m\log n \log (1/\epsilon))$. The solver utilizes in a standard way a `preconditioning' chain of progressively sparser graphs. To claim the faster running time we make a two-fold improvement in the algorithm for constructing the chain. The new chain exploits previously unknown properties of the graph sparsification algorithm given in [Koutis, Miller, Peng, FOCS 2010], allowing for stronger preconditioning properties. We also present an algorithm of independent interest that constructs nearly-tight low-stretch spanning trees in time $\tilde{O}(m\log{n})$, a factor of $O(\log{n})$ faster than the algorithm in [Abraham, Bartal, Neiman, FOCS 2008]. This speedup directly improves the construction time of the preconditioning chain.
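    For context, here is the generic preconditioned conjugate gradient loop that such solvers are built around, with a trivial diagonal (Jacobi) preconditioner standing in for the paper's graph-based preconditioning chain (an interface sketch only; the instance is an assumed toy SDD matrix):

      import numpy as np

      def pcg(A, b, tol=1e-8, max_iter=1000):
          M_inv = 1.0 / np.diag(A)           # Jacobi preconditioner
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol * np.linalg.norm(b):
                  break
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # SDD example: Laplacian of a path plus a small diagonal shift
      A = np.array([[2.1, -1.0, 0.0],
                    [-1.0, 2.1, -1.0],
                    [0.0, -1.0, 2.1]])
      b = np.array([1.0, 0.0, -1.0])
      print(pcg(A, b))

    The quality of the preconditioner governs the iteration count, which is why constructing better (and cheaper) chains of graph preconditioners translates directly into a faster solver.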
  • Balls and Bins: Smaller Hash Families and Faster Evaluation. Authors: L. Elisa Celis, Omer Reingold, Gil Segev, and Udi Wieder
    A fundamental fact in the analysis of randomized algorithms is that when $n$ balls are hashed into $n$ bins independently and uniformly at random, with high probability each bin contains at most $O(\log n / \log \log n)$ balls. In various applications, however, the assumption that a truly random hash function is available is not always valid, and explicit functions are required. In this paper we study the size of families (or, equivalently, the description length of their functions) that guarantee a maximal load of $O(\log n / \log \log n)$ with high probability, as well as the evaluation time of their functions. Whereas such functions must be described using $\Omega(\log n)$ bits, the best upper bound was formerly $O(\log^2 n / \log \log n)$ bits, which is attained by $O(\log n / \log \log n)$-wise independent functions. Traditional constructions of the latter offer an evaluation time of $O(\log n / \log \log n)$, which according to Siegel's lower bound [FOCS '89] can be reduced only at the cost of significantly increasing the description length. We construct two families that guarantee a maximal load of $O(\log n / \log \log n)$ with high probability. Our constructions are based on two different approaches, and exhibit different trade-offs between the description length and the evaluation time. The first construction shows that $O(\log n / \log \log n)$-wise independence can in fact be replaced by ``gradually increasing independence'', resulting in functions that are described using $O(\log n \log \log n)$ bits and evaluated in time $O(\log n \log \log n)$. The second construction is based on derandomization techniques for space-bounded computations combined with a tailored construction of a pseudorandom generator, resulting in functions that are described using $O(\log^{3/2} n)$ bits and evaluated in time $O(\sqrt{\log n})$. The latter can be compared to Siegel's lower bound stating that $O(\log n / \log \log n)$-wise independent functions that are evaluated in time $O(\sqrt{\log n})$ must be described using $\Omega(2^{\sqrt{\log n}})$ bits.
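    The classical benchmark the paper starts from is easy to check empirically (a quick Python simulation of truly random hashing; the chosen values of $n$ are arbitrary):

      import math
      import random
      from collections import Counter

      for n in (1 << 10, 1 << 14, 1 << 18):
          # throw n balls into n bins uniformly at random, record the max load
          load = max(Counter(random.randrange(n) for _ in range(n)).values())
          bound = math.log(n) / math.log(math.log(n))
          print(f"n = {n:7d}  max load = {load}  log n / log log n = {bound:.2f}")

    The observed maximum load stays within a small constant factor of $\log n / \log \log n$; the paper's contribution is achieving the same guarantee with hash functions that need far fewer random bits than full independence.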