TechTalks from event: FOCS 2011

We will be uploading the videos for FOCS 2011 during the week of Nov 28th, 2011. If you find any discrepancy, please let us know by clicking the "report error" link on the talk page. If you did not permit your video to be published and we have published your talk by mistake, please notify us immediately at support AT weyond.com

11B

  • Maximizing Expected Utility for Stochastic Combinatorial Optimization Problems Authors: Jian Li and Amol Deshpande
    We study the stochastic versions of a broad class of combinatorial problems where the weights of the elements in the input dataset are uncertain. The class of problems that we study includes shortest paths, minimum weight spanning trees, and minimum weight matchings over probabilistic graphs, and other combinatorial problems like knapsack. We observe that the expected value is inadequate in capturing different types of {\em risk-averse} or {\em risk-prone} behaviors, and instead we consider a more general objective which is to maximize the {\em expected utility} of the solution for some given utility function, rather than the expected weight (expected weight becomes a special case). We show that we can obtain a polynomial time approximation algorithm with {\em additive error} $\epsilon$ for any $\epsilon>0$, if there is a pseudopolynomial time algorithm for the {\em exact} version of the problem. (This is true for the problems mentioned above.) Our result generalizes several prior results on stochastic shortest path, stochastic spanning tree, and stochastic knapsack. Our algorithm for utility maximization makes use of the separability of exponential utility and a technique to decompose a general utility function into exponential utility functions, which may be useful in other stochastic optimization problems. (A small sketch of the exponential-utility separability idea appears after this session's list.)
  • Approximation Algorithms for Submodular Multiway Partition Authors: Chandra Chekuri and Alina Ene
    We study algorithms for the {\sc Submodular Multiway Partition} problem (\SubMP). An instance of \SubMP consists of a finite ground set $V$, a subset of $k$ elements $S = \{s_1,s_2,\ldots,s_k\}$ called terminals, and a non-negative submodular set function $f:2^V\rightarrow \mathbb{R}_+$ on $V$ provided as a value oracle. The goal is to partition $V$ into $k$ sets $A_1,\ldots,A_k$ such that for $1 \le i \le k$, $s_i \in A_i$ and $\sum_{i=1}^k f(A_i)$ is minimized. \SubMP generalizes some well-known problems such as the {\sc Multiway Cut} problem in graphs and hypergraphs, and the {\sc Node-weighted Multiway Cut} problem in graphs. \SubMP for arbitrary submodular functions (instead of just symmetric functions) was considered by Zhao, Nagamochi and Ibaraki \cite{ZhaoNI05}. Previous algorithms were based on greedy splitting and divide and conquer strategies. In very recent work \cite{ChekuriE11} we proposed a convex-programming relaxation for \SubMP based on the Lov\'asz-extension of a submodular function and showed its applicability for some special cases. In this paper we obtain the following results for arbitrary submodular functions via this relaxation. \begin{itemize} \item A $2$-approximation for \SubMP. This improves the $(k-1)$-approximation from \cite{ZhaoNI05}. \item A $(1.5-1/k)$-approximation for \SubMP when $f$ is {\em symmetric}. This improves the $2(1-1/k)$-approximation from \cite{Queyranne99,ZhaoNI05}. \end{itemize}
  • An FPTAS for #Knapsack and Related Counting Problems Authors: Parikshit Gopalan and Adam Klivans and Raghu Meka and Daniel Stefankovic and Santosh Vempala and Eric Vigoda
    Given n elements with non-negative integer weights w_1,...,w_n and an integer capacity C, we consider the counting version of the classic knapsack problem: find the number of distinct subsets whose weights add up to at most C. We give the first deterministic, fully polynomial-time approximation scheme (FPTAS) for estimating the number of solutions to any knapsack constraint (our estimate has relative error $1\pm\epsilon$). Our algorithm is based on dynamic programming. Previously, fully polynomial randomized approximation schemes (FPRAS) were known, first by Morris and Sinclair via Markov chain Monte Carlo techniques, and subsequently by Dyer via dynamic programming and rejection sampling. In addition, we present a new method for deterministic approximate counting using read-once branching programs. Our approach yields an FPTAS for several other counting problems, including counting solutions for the multidimensional knapsack problem with a constant number of constraints, the general integer knapsack problem, and the contingency tables problem with a constant number of rows. (A sketch of the underlying exact counting dynamic program appears after this session's list.)
  • Approximation Algorithms for Correlated Knapsacks and Non-Martingale Bandits Authors: Anupam Gupta and Ravishankar Krishnaswamy and Marco Molinaro and R. Ravi
    In the stochastic knapsack problem, we are given a knapsack of size B, and a set of items whose sizes and rewards are drawn from a known probability distribution. However, the only way to know the actual size and reward is to schedule the item; when it completes, we get to know these values. The goal is to schedule these items (possibly making adaptive decisions based on the sizes seen thus far) to maximize the expected total reward of items which successfully pack into the knapsack. We know constant-factor approximations when (i) the rewards and sizes are independent of each other, and (ii) we cannot prematurely cancel items after we schedule them. What can we say when either or both of these assumptions are relaxed? Related to this are other stochastic packing problems like the multi-armed bandit (and budgeted learning) problems; here one is given several arms which evolve in a specified stochastic fashion with each pull, and the goal is to (adaptively) decide which arms to pull, in order to maximize the expected reward obtained after B pulls in total. Much recent work on this problem focuses on the case when the evolution of each arm follows a martingale, i.e., when the expected reward from one pull of an arm is the same as the reward at the current state. What can we say when the rewards do not form a martingale? In this paper, we give constant-factor approximation algorithms for the stochastic knapsack problem with correlations and/or cancellations. Extending ideas we develop for this problem, we also give constant-factor approximations for MAB problems without the martingale assumption. Indeed, we can show that previously proposed linear programming relaxations for these problems have large integrality gaps. So we propose new time-indexed LP relaxations; using a decomposition and “gap-filling” approach, we convert these fractional solutions to distributions over strategies, and then use the LP values and the time ordering information from these strategies to devise randomized adaptive scheduling algorithms. We hope our LP formulation and decomposition methods may provide a new way to address other stochastic optimization problems with more general contexts.
  • Evolution with Recombination Authors: Varun Kanade
    Valiant (2007) introduced a computational model of evolution and suggested that Darwinian evolution be studied in the framework of computational learning theory. Valiant describes evolution as a restricted form of learning where exploration is limited to a set of possible mutations and feedback is received by the survival of the fittest mutant. In subsequent work Feldman (2008) showed that evolvability in Valiant's model is equivalent to learning in the correlational statistical query (CSQ) model. We extend Valiant's model to include genetic recombination and show that in certain cases, recombination can significantly speed up the process of evolution in terms of the number of generations, though at the expense of population size. This follows by a reduction from parallel CSQ algorithms to evolution with recombination. This gives an exponential speed-up (in terms of the number of generations) over previously known results for evolving conjunctions and halfspaces with respect to restricted distributions.
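
The separability idea mentioned in the expected-utility talk above can be illustrated on a toy example: for an exponential utility $u(w)=\exp(-\lambda w)$ and a solution whose total weight is a sum of independent random element weights, the expected utility factorizes into a product of per-element terms. The snippet below is only an illustration of that identity on a fixed path, not the paper's algorithm; the edge distributions and the parameter lam are invented for the example.

```python
import itertools
import math

# Toy example: a path made of 3 independent random edge weights.
# Each edge weight takes value v with probability p, given as (v, p) pairs.
edges = [
    [(1, 0.5), (3, 0.5)],
    [(2, 0.7), (5, 0.3)],
    [(1, 0.9), (10, 0.1)],
]
lam = 0.2  # risk-aversion parameter of the exponential utility u(w) = exp(-lam * w)

# Direct computation: enumerate every joint outcome of the edge weights.
brute = 0.0
for outcome in itertools.product(*edges):
    prob = math.prod(p for _, p in outcome)
    total_weight = sum(v for v, _ in outcome)
    brute += prob * math.exp(-lam * total_weight)

# Separability: E[exp(-lam * sum_i W_i)] = prod_i E[exp(-lam * W_i)]
# when the W_i are independent, so each edge contributes one factor.
separable = math.prod(
    sum(p * math.exp(-lam * v) for v, p in edge) for edge in edges
)

print(brute, separable)  # the two values agree up to floating-point error
```

Because the objective factorizes element by element, exponential utilities can be folded into standard shortest-path-style dynamic programs; the abstract's contribution, roughly, is to approximate a general utility function by such exponential terms.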
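For the #Knapsack talk, the natural starting point is the exact counting recurrence: the number of subsets of the first i items with total weight exactly c satisfies N(i, c) = N(i-1, c) + N(i-1, c - w_i). The sketch below implements only this exact pseudopolynomial dynamic program, which runs in time O(nC); it is not the FPTAS from the paper, whose whole point is to avoid the dependence on C. The item weights and capacity are invented for the example.

```python
def count_knapsack_solutions(weights, capacity):
    """Count subsets of `weights` whose total weight is at most `capacity`.

    Exact pseudopolynomial DP: dp[c] = number of subsets of the items
    processed so far with total weight exactly c.
    """
    dp = [0] * (capacity + 1)
    dp[0] = 1  # the empty subset
    for w in weights:
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] += dp[c - w]
    return sum(dp)  # subsets with total weight 0..capacity

# Example (weights and capacity invented for illustration); prints 11.
print(count_knapsack_solutions([2, 3, 5, 7], 10))
```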


5A

  • On the Power of Adaptivity in Sparse Recovery Authors: Piotr Indyk and Eric Price and David Woodruff
    The goal of (stable) sparse recovery is to recover a $k$-sparse approximation $x^*$ of a vector $x$ from linear measurements of $x$. Specifically, the goal is to recover $x^*$ such that \[ \norm{p}{x-x^*} \le C \min_{k\text{-sparse } x'} \norm{q}{x-x'} \] for some constant $C$ and norm parameters $p$ and $q$. It is known that, for $p=q=1$ or $p=q=2$, this task can be accomplished using $m=O(k \log (n/k))$ {\em non-adaptive} measurements~\cite{CRT06:Stable-Signal} and that this bound is tight~\cite{DIPW,FPRU}. In this paper we show that if one is allowed to perform measurements that are {\em adaptive}, then the number of measurements can be considerably reduced. Specifically, for $C=1+\epsilon$ and $p=q=2$ we show: * A scheme with $m=O(\frac{1}{\eps}k \log \log (n\eps/k))$ measurements that uses $O(\sqrt{\log k} \cdot \log \log (n\eps/k))$ rounds. This is a significant improvement over the {\em best possible} non-adaptive bound. * A scheme with $m=O(\frac{1}{\eps}k \log (k/\eps) + k \log (n/k))$ measurements that uses {\em two} rounds. This improves over the {\em best known} non-adaptive bound. To the best of our knowledge, these are the first results of this type. (A small snippet illustrating the recovery benchmark appears after this session's list.)
  • (1+eps)-Approximate Sparse Recovery Authors: Eric Price and David P. Woodruff
    The problem central to sparse recovery and compressive sensing is that of \emph{stable sparse recovery}: we want a distribution $\mathcal{A}$ of matrices $A \in \R^{m \times n}$ such that, for any $x \in \R^n$ and with probability $1 - \delta > 2/3$ over $A \in \mathcal{A}$, there is an algorithm to recover $\hat{x}$ from $Ax$ with \begin{align} \norm{p}{\hat{x} - x} \leq C \min_{k\text{-sparse } x'} \norm{p}{x - x'} \end{align} for some constant $C > 1$ and norm $p$. The measurement complexity of this problem is well understood for constant $C > 1$. However, in a variety of applications it is important to obtain $C = 1+\eps$ for a small $\eps > 0$, and this complexity is not well understood. We resolve the dependence on $\eps$ in the number of measurements required of a $k$-sparse recovery algorithm, up to polylogarithmic factors for the central cases of $p=1$ and $p=2$. Namely, we give new algorithms and lower bounds that show the number of measurements required is $k/\eps^{p/2} \textrm{polylog}(1/\eps)$. We also give matching bounds when the output is required to be $k$-sparse, in which case we achieve $k/\eps^p \textrm{polylog}(1/\eps)$. This shows the distinction between the complexity of sparse and non-sparse outputs is fundamental.
  • Near-Optimal Column-Based Matrix Reconstruction Authors: Christos Boutsidis and Petros Drineas and Malik Magdon-Ismail
    We consider low-rank reconstruction of a matrix using its columns and we present asymptotically optimal algorithms for both spectral norm and Frobenius norm reconstruction. The main tools we introduce to obtain our results are: (i) the use of fast approximate SVD-like decompositions for column reconstruction, and (ii) two deterministic algorithms for selecting rows from matrices with orthonormal columns, building upon the sparse representation theorem for decompositions of the identity that appeared in~\cite{BSS09}. (A small numerical illustration appears after this session's list.)
  • Near Linear Lower Bound for Dimension Reduction in L1 Authors: Alexandr Andoni and Moses S. Charikar and Ofer Neiman and Huy L. Nguyen
    Given a set of $n$ points in $\ell_{1}$, how many dimensions are needed to represent all pairwise distances within a specific distortion? This dimension-distortion tradeoff question is well understood for the $\ell_{2}$ norm, where $O((\log n)/\epsilon^{2})$ dimensions suffice to achieve $1+\epsilon$ distortion. In sharp contrast, there is a significant gap between upper and lower bounds for dimension reduction in $\ell_{1}$. A recent result shows that distortion $1+\epsilon$ can be achieved with $n/\epsilon^{2}$ dimensions. On the other hand, the only lower bounds known are that distortion $\delta$ requires $n^{\Omega(1/\delta^2)}$ dimensions and that distortion $1+\epsilon$ requires $n^{1/2-O(\epsilon \log(1/\epsilon))}$ dimensions. In this work, we show the first near linear lower bounds for dimension reduction in $\ell_{1}$. In particular, we show that $1+\epsilon$ distortion requires at least $n^{1-O(1/\log(1/\epsilon))}$ dimensions. Our proofs are combinatorial, but inspired by linear programming. In fact, our techniques lead to a simple combinatorial argument that is equivalent to the LP based proof of Brinkman-Charikar for lower bounds on dimension reduction in $\ell_{1}$. (The $\ell_2$ baseline this contrasts with is illustrated in a small snippet after this session's list.)
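
The recovery guarantee in the two sparse-recovery talks above is stated relative to the best $k$-sparse approximation of $x$, which is simply $x$ restricted to its $k$ largest-magnitude coordinates. The snippet below only computes that benchmark error $\min_{k\text{-sparse } x'} \|x - x'\|_p$ and checks whether a candidate reconstruction meets the $(1+\epsilon)$ guarantee; it does not implement any measurement scheme, and the vectors, names (xhat), and parameters are invented for the example.

```python
import numpy as np

def best_k_sparse_error(x, k, p=2):
    """||x - x*||_p where x* keeps the k largest-magnitude entries of x."""
    residual = np.sort(np.abs(x))[:-k] if k > 0 else np.abs(x)
    return np.linalg.norm(residual, ord=p)

def meets_guarantee(x, xhat, k, eps, p=2):
    """Check the (1+eps)-approximate sparse recovery guarantee for xhat."""
    return np.linalg.norm(x - xhat, ord=p) <= (1 + eps) * best_k_sparse_error(x, k, p)

# Example with invented data: a nearly 3-sparse vector plus small noise.
rng = np.random.default_rng(0)
x = np.zeros(100)
x[[3, 17, 42]] = [10.0, -7.0, 5.0]
x += 0.01 * rng.standard_normal(100)
xhat = np.where(np.abs(x) >= 1.0, x, 0.0)  # a toy "reconstruction"
print(meets_guarantee(x, xhat, k=3, eps=0.1))
```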
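For the column-based matrix reconstruction talk, the quantity being bounded is the error of projecting A onto the span of a few of its own columns, compared with the best rank-k approximation obtained from the SVD. The sketch below measures that ratio for a simple classical heuristic (sampling columns with probability proportional to their squared norms), which is not the deterministic selection developed in the paper; the matrix, the target rank k, and the number of columns c are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented test matrix with a decaying spectrum.
U, _ = np.linalg.qr(rng.standard_normal((200, 50)))
V, _ = np.linalg.qr(rng.standard_normal((100, 50)))
A = U @ np.diag(0.8 ** np.arange(50)) @ V.T

k, c = 5, 15  # target rank and number of sampled columns

# Benchmark: best rank-k Frobenius-norm error, from the singular values.
s = np.linalg.svd(A, compute_uv=False)
best_rank_k_err = np.sqrt(np.sum(s[k:] ** 2))

# Heuristic column selection: sample c distinct columns with probability
# proportional to their squared norms.
col_norms_sq = np.sum(A ** 2, axis=0)
cols = rng.choice(A.shape[1], size=c, replace=False,
                  p=col_norms_sq / col_norms_sq.sum())
C = A[:, cols]

# Reconstruct A by projecting it onto the span of the selected columns.
proj_err = np.linalg.norm(A - C @ np.linalg.pinv(C) @ A, "fro")

print(proj_err / best_rank_k_err)  # ratio >= 1; the paper bounds such ratios
```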
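The $\ell_1$ lower-bound talk contrasts with the $\ell_2$ setting, where a random Gaussian projection into $O((\log n)/\epsilon^2)$ dimensions preserves all pairwise distances to within $1+\epsilon$ (the Johnson-Lindenstrauss phenomenon the abstract cites). The snippet below just checks that baseline empirically on random points; n, d, eps, and the constant in the target dimension are invented for the example, and no analogous statement holds for $\ell_1$, which is exactly what the lower bound shows.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, d, eps = 300, 5000, 0.25
target_dim = int(np.ceil(8 * np.log(n) / eps**2))  # O(log n / eps^2); constant is illustrative

X = rng.standard_normal((n, d))                     # n points in R^d
G = rng.standard_normal((d, target_dim)) / np.sqrt(target_dim)
Y = X @ G                                           # randomly projected points

# Worst-case multiplicative distortion over all pairs, in the l2 norm.
ratios = [
    np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
    for i, j in combinations(range(n), 2)
]
print(min(ratios), max(ratios))  # typically within [1 - eps, 1 + eps]
```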