TechTalks from event: FOCS 2011

We will be uploading the videos for FOCS 2011 during the week of Nov 28th 2011. If you find any discrepancy, please let us know by clicking the "report error" link on the talk page. If you did not give permission for your talk to be published and we have published it by mistake, please notify us immediately at support AT weyond.com

Session 9A

  • A Unified Continuous Greedy Algorithm for Submodular Maximization. Authors: Moran Feldman, Joseph (Seffi) Naor, and Roy Schwartz
    The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to economics, algorithmic game theory and combinatorial optimization. Classical works on these problems are mostly combinatorial in nature. Recently, however, many results based on continuous algorithmic tools have emerged. The main bottleneck of such continuous techniques is how to approximately solve a non-convex relaxation for the submodular problem at hand. Thus, the efficient computation of better fractional solutions immediately implies improved approximations for numerous applications. A simple and elegant method, called ``continuous greedy'', successfully tackles this issue for monotone submodular objective functions; however, only much more complex tools are known to work for general non-monotone submodular objectives. In this work we present a new unified continuous greedy algorithm which finds approximate fractional solutions for both the non-monotone and monotone cases, and improves on the approximation ratio for many applications. For general non-monotone submodular objective functions, our algorithm achieves an improved approximation ratio of about 1/e. For monotone submodular objective functions, our algorithm achieves an approximation ratio that depends on the density of the polytope defined by the problem at hand, which is always at least as good as the previously known best approximation ratio of 1 - 1/e. Some notable immediate implications are an improved 1/e-approximation for maximizing a non-monotone submodular function subject to a matroid or O(1)-knapsack constraints, and information-theoretically tight approximations for Submodular Max-SAT and Submodular Welfare with k players, for any number of players k. A framework for submodular optimization problems, called the contention resolution framework, was introduced recently by Chekuri et al. The improved approximation ratio of the unified continuous greedy algorithm implies improved approximation ratios for many problems through this framework. Moreover, via a parameter called stopping time, our algorithm merges the relaxation-solving and re-normalization steps of the framework, and achieves, for some applications, further improvements. We also describe new monotone balanced contention resolution schemes for various matching, scheduling and packing problems, thus improving the approximations achieved for these problems via the framework. (A minimal code sketch of the classical continuous greedy step appears after this talk list.)
  • Enumerative Lattice Algorithms in any Norm via M-ellipsoid Coverings. Authors: Daniel Dadush, Chris Peikert, and Santosh Vempala
    We give a novel algorithm for enumerating lattice points in any convex body, and give applications to several classic lattice problems, including the Shortest and Closest Vector Problems (SVP and CVP, respectively) and Integer Programming (IP). Our enumeration technique relies on a classical concept from asymptotic convex geometry known as the M-ellipsoid, and uses as a crucial subroutine the recent algorithm of Micciancio and Voulgaris (STOC 2010) for lattice problems in the $\ell_{2}$ norm. As a main technical contribution, which may be of independent interest, we build on the techniques of Klartag (Geometric and Functional Analysis, 2006) to give an expected $2^{O(n)}$-time algorithm for computing an M-ellipsoid for any $n$-dimensional convex body. As applications, we give deterministic $2^{O(n)}$-time and -space algorithms for solving exact SVP, and exact CVP when the target point is sufficiently close to the lattice, on $n$-dimensional lattices in any (semi-)norm, given an M-ellipsoid of the unit ball. In many norms of interest, including all $\ell_{p}$ norms, an M-ellipsoid is computable in deterministic $\mathrm{poly}(n)$ time, in which case these algorithms are fully deterministic. Here our approach may be seen as a derandomization of the ``AKS sieve'' for exact SVP and CVP (Ajtai, Kumar, and Sivakumar; STOC 2001 and CCC 2002). As a further application of our SVP algorithm, we derive an expected $O(f^*(n))^n$-time algorithm for Integer Programming, where $f^*(n)$ denotes the optimal bound in the so-called ``flatness theorem,'' which satisfies $f^*(n) = O(n^{4/3} \mathrm{polylog}(n))$ and is conjectured to be $f^{*}(n)=\Theta(n)$. Our runtime improves upon the previous best of $O(n^{2})^{n}$ by Hildebrand and Köppe (2010). (A toy brute-force lattice enumeration sketch appears after this talk list.)
  • A nearly m log n time solver for SDD linear systems. Authors: Ioannis Koutis, Gary L. Miller, and Richard Peng
    We present an improved algorithm for solving symmetric diagonally dominant (SDD) linear systems. On input of an $n\times n$ symmetric diagonally dominant matrix $A$ with $m$ non-zero entries and a vector $b$ such that $A\bar{x} = b$ for some (unknown) vector $\bar{x}$, our algorithm computes a vector $x$ such that $||x-\bar{x}||_A < \epsilon ||\bar{x}||_A$ (where $||\cdot||_A$ denotes the $A$-norm) in time ${\tilde O}(m\log n \log (1/\epsilon))$. The solver utilizes in a standard way a `preconditioning' chain of progressively sparser graphs. To claim the faster running time we make a two-fold improvement in the algorithm for constructing the chain. The new chain exploits previously unknown properties of the graph sparsification algorithm given in [Koutis, Miller, Peng, FOCS 2010], allowing for stronger preconditioning properties. We also present an algorithm of independent interest that constructs nearly-tight low-stretch spanning trees in time $\tilde{O}(m\log{n})$, a factor of $O(\log{n})$ faster than the algorithm in [Abraham, Bartal, Neiman, FOCS 2008]. This speedup is directly reflected in the construction time of the preconditioning chain. (A generic preconditioned-solver sketch, unrelated to the paper's preconditioning chain, appears after this talk list.)
  • Balls and Bins: Smaller Hash Families and Faster Evaluation. Authors: L. Elisa Celis, Omer Reingold, Gil Segev, and Udi Wieder
    A fundamental fact in the analysis of randomized algorithms is that when $n$ balls are hashed into $n$ bins independently and uniformly at random, with high probability each bin contains at most $O(\log n / \log \log n)$ balls. In various applications, however, the assumption that a truly random hash function is available is not always valid, and explicit functions are required. In this paper we study the size of families (or, equivalently, the description length of their functions) that guarantee a maximal load of $O(\log n / \log \log n)$ with high probability, as well as the evaluation time of their functions. Whereas such functions must be described using $\Omega(\log n)$ bits, the best previously known upper bound was $O(\log^2 n / \log \log n)$ bits, which is attained by $O(\log n / \log \log n)$-wise independent functions. Traditional constructions of the latter offer an evaluation time of $O(\log n / \log \log n)$, which according to Siegel's lower bound [FOCS '89] can be reduced only at the cost of significantly increasing the description length. We construct two families that guarantee a maximal load of $O(\log n / \log \log n)$ with high probability. Our constructions are based on two different approaches, and exhibit different trade-offs between the description length and the evaluation time. The first construction shows that $O(\log n / \log \log n)$-wise independence can in fact be replaced by ``gradually increasing independence'', resulting in functions that are described using $O(\log n \log \log n)$ bits and evaluated in time $O(\log n \log \log n)$. The second construction is based on derandomization techniques for space-bounded computations combined with a tailored construction of a pseudorandom generator, resulting in functions that are described using $O(\log^{3/2} n)$ bits and evaluated in time $O(\sqrt{\log n})$. The latter can be compared to Siegel's lower bound stating that $O(\log n / \log \log n)$-wise independent functions that are evaluated in time $O(\sqrt{\log n})$ must be described using $\Omega(2^{\sqrt{\log n}})$ bits. (A sketch of the traditional $k$-wise independent baseline appears after this talk list.)
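
To make the ``continuous greedy'' method mentioned in the first talk concrete, here is a minimal sketch of the classical monotone continuous greedy (Calinescu, Chekuri, Pal and Vondrak) for a simple cardinality constraint, which the talk's unified algorithm builds on and generalizes to non-monotone objectives. All names (continuous_greedy, steps, samples) and the toy coverage function are illustrative assumptions, not the paper's algorithm.

```python
import random

def continuous_greedy(f, ground_set, k, steps=50, samples=30):
    """Classical monotone continuous greedy on the (sampled) multilinear extension.

    f          : set function S -> float, assumed monotone submodular
    ground_set : list of elements
    k          : cardinality constraint, i.e. the polytope {y in [0,1]^n : sum(y) <= k}
    Returns a fractional point y; rounding it (e.g. by swap rounding) is a separate step.
    """
    y = {e: 0.0 for e in ground_set}
    delta = 1.0 / steps
    for _ in range(steps):
        # Estimate each element's marginal gain at y by sampling random sets R(y)
        # that contain every e independently with probability y[e].
        gain = {e: 0.0 for e in ground_set}
        for _ in range(samples):
            R = {e for e in ground_set if random.random() < y[e]}
            base = f(R)
            for e in ground_set:
                if e not in R:
                    gain[e] += (f(R | {e}) - base) / samples
        # Move toward the polytope vertex maximizing the linearized objective;
        # for a cardinality constraint that vertex is the indicator of the top-k gains.
        for e in sorted(ground_set, key=lambda e: gain[e], reverse=True)[:k]:
            y[e] = min(1.0, y[e] + delta)
    return y

if __name__ == "__main__":
    # Toy monotone submodular objective: coverage f(S) = |union of sets covered by S|.
    cover = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
    f = lambda S: len(set().union(*[cover[e] for e in S])) if S else 0
    print(sorted(continuous_greedy(f, list(cover), k=2).items()))
```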
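
The second talk is about enumerating lattice points inside a convex body. The toy sketch below does this by brute force for a Euclidean ball, bounding each integer coefficient through the inverse basis, and uses it for a naive shortest-vector search. It only illustrates the problem statement; it has none of the M-ellipsoid machinery or the $2^{O(n)}$ running time of the talk's algorithm, and the function names are made up for the example.

```python
import itertools
import numpy as np

def lattice_points_in_ball(B, r):
    """Enumerate all lattice points {B c : c integer} with ||B c||_2 <= r.

    B is an n x n basis matrix whose columns are the lattice vectors.
    From x = B c we get c = B^{-1} x, so |c_i| <= r * ||row_i(B^{-1})||_2;
    we enumerate that integer box and filter.  Exponential in n, purely illustrative.
    """
    B_inv = np.linalg.inv(B)
    bounds = [int(np.floor(r * np.linalg.norm(B_inv[i]))) for i in range(B.shape[0])]
    points = []
    for c in itertools.product(*(range(-b, b + 1) for b in bounds)):
        x = B @ np.array(c)
        if np.linalg.norm(x) <= r:
            points.append(x)
    return points

def shortest_vector(B, r):
    """Shortest nonzero lattice vector among those found inside the radius-r ball."""
    nonzero = [x for x in lattice_points_in_ball(B, r) if np.any(x != 0)]
    return min(nonzero, key=np.linalg.norm) if nonzero else None

if __name__ == "__main__":
    B = np.array([[2.0, 1.0], [0.0, 2.0]])   # basis columns (2,0) and (1,2)
    print(shortest_vector(B, r=3.0))          # expect (+/-2, 0)
```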
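
The third abstract concerns solvers that return an approximate solution measured in the $A$-norm. As a point of reference only, the sketch below is textbook preconditioned conjugate gradient with a plain Jacobi (diagonal) preconditioner on a small SDD system, reporting the error in the $A$-norm afterwards; it shows the solver interface, not the paper's preconditioning chain of sparsifiers and low-stretch trees. The test matrix and tolerances are arbitrary choices for the example.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
    """Textbook preconditioned conjugate gradient for a symmetric positive definite A.

    M_inv : function applying the preconditioner's inverse to a vector.
    Stops when the residual norm drops below tol * ||b||.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

if __name__ == "__main__":
    # A small SDD test matrix: a tridiagonal discrete Laplacian plus a tiny diagonal shift.
    n = 200
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.01 * np.eye(n)
    b = np.random.default_rng(0).standard_normal(n)
    x = pcg(A, b, M_inv=lambda v: v / np.diag(A))   # Jacobi (diagonal) preconditioner
    err = x - np.linalg.solve(A, b)
    print("A-norm of the error:", np.sqrt(err @ (A @ err)))
```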
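
Finally, the balls-and-bins abstract compares its new families against the traditional $O(\log n / \log \log n)$-wise independent construction. The sketch below implements that traditional baseline, a uniformly random polynomial of degree $k-1$ over a prime field, and measures the maximum bin load empirically when $n$ balls are hashed into $n$ bins. The prime, the value of $n$, and the function names are arbitrary choices for illustration, and the small bias introduced by the final reduction modulo the number of bins is ignored.

```python
import math
import random
from collections import Counter

def kwise_hash(k, p, n_bins):
    """Sample a k-wise independent hash function h : {0,...,p-1} -> {0,...,n_bins-1}.

    Classical construction: a uniformly random polynomial of degree k-1 over GF(p),
    reduced modulo the number of bins.  Describing h costs k * log2(p) bits.
    """
    coeffs = [random.randrange(p) for _ in range(k)]
    def h(x):
        acc = 0
        for c in coeffs:            # Horner evaluation of the polynomial mod p
            acc = (acc * x + c) % p
        return acc % n_bins         # ignores the slight bias of this last reduction
    return h

def max_load(n, k, p=2_147_483_647):   # p = 2^31 - 1, a Mersenne prime
    """Hash balls 0..n-1 into n bins with a k-wise independent function; return the max load."""
    h = kwise_hash(k, p, n)
    return max(Counter(h(ball) for ball in range(n)).values())

if __name__ == "__main__":
    n = 1 << 14
    k = max(2, round(math.log(n) / math.log(math.log(n))))   # roughly log n / log log n
    print("k =", k, "max load =", max_load(n, k))
```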