TechTalks from event: FOCS 2011

We will be uploading the videos for FOCS 2011 during the week of Nov 28th 2011. If you find any discrepancy, please let us know by clicking the "report error" link on the talk page. If you did not give permission for your video to be published and we have published your talk by mistake, please notify us immediately at support AT weyond.com

Session 5A

  • On the Power of Adaptivity in Sparse Recovery. Authors: Piotr Indyk, Eric Price, and David Woodruff
    The goal of (stable) sparse recovery is to recover a $k$-sparse approximation $x^*$ of a vector $x$ from linear measurements of $x$. Specifically, the goal is to recover $x^*$ such that \[ \|x-x^*\|_p \le C \min_{k\text{-sparse } x'} \|x-x'\|_q \] for some constant $C$ and norm parameters $p$ and $q$. It is known that, for $p=q=1$ or $p=q=2$, this task can be accomplished using $m=O(k \log (n/k))$ {\em non-adaptive} measurements~\cite{CRT06:Stable-Signal} and that this bound is tight~\cite{DIPW,FPRU}. In this paper we show that if one is allowed to perform measurements that are {\em adaptive}, then the number of measurements can be considerably reduced. Specifically, for $C=1+\epsilon$ and $p=q=2$ we show: (i) a scheme with $m=O(\frac{1}{\epsilon}k \log \log (n\epsilon/k))$ measurements that uses $O(\sqrt{\log k} \cdot \log \log (n\epsilon/k))$ rounds, a significant improvement over the {\em best possible} non-adaptive bound; and (ii) a scheme with $m=O(\frac{1}{\epsilon}k \log (k/\epsilon) + k \log (n/k))$ measurements that uses {\em two} rounds, improving over the {\em best known} non-adaptive bound. To the best of our knowledge, these are the first results of this type. (A small numerical sketch of the recovery guarantee appears after this list.)
  • (1+eps)-Approximate Sparse Recovery. Authors: Eric Price and David P. Woodruff
    The problem central to sparse recovery and compressive sensing is that of \emph{stable sparse recovery}: we want a distribution $\mathcal{A}$ of matrices $A \in \mathbb{R}^{m \times n}$ such that, for any $x \in \mathbb{R}^n$ and with probability $1 - \delta > 2/3$ over $A \in \mathcal{A}$, there is an algorithm to recover $\hat{x}$ from $Ax$ with \[ \|\hat{x} - x\|_p \leq C \min_{k\text{-sparse } x'} \|x - x'\|_p \] for some constant $C > 1$ and norm $p$. The measurement complexity of this problem is well understood for constant $C > 1$. However, in a variety of applications it is important to obtain $C = 1+\epsilon$ for a small $\epsilon > 0$, and this complexity is not well understood. We resolve the dependence on $\epsilon$ in the number of measurements required of a $k$-sparse recovery algorithm, up to polylogarithmic factors, for the central cases of $p=1$ and $p=2$. Namely, we give new algorithms and lower bounds that show the number of measurements required is $k/\epsilon^{p/2} \cdot \mathrm{polylog}(1/\epsilon)$. We also give matching bounds when the output is required to be $k$-sparse, in which case we achieve $k/\epsilon^p \cdot \mathrm{polylog}(1/\epsilon)$. This shows that the distinction between the complexity of sparse and non-sparse outputs is fundamental. (A small numerical illustration of this distinction appears after this list.)
  • Near-Optimal Column-Based Matrix Reconstruction. Authors: Christos Boutsidis, Petros Drineas, and Malik Magdon-Ismail
    We consider low-rank reconstruction of a matrix using a subset of its columns and we present asymptotically optimal algorithms for both spectral norm and Frobenius norm reconstruction. The main tools we introduce to obtain our results are: (i) the use of fast approximate SVD-like decompositions for column reconstruction, and (ii) two deterministic algorithms for selecting rows from matrices with orthonormal columns, building upon the sparse representation theorem for decompositions of the identity that appeared in~\cite{BSS09}. (A small numerical sketch of column-based reconstruction appears after this list.)
  • Near Linear Lower Bound for Dimension Reduction in L1. Authors: Alexandr Andoni, Moses S. Charikar, Ofer Neiman, and Huy L. Nguyen
    Given a set of $n$ points in $\ell_{1}$, how many dimensions are needed to represent all pairwise distances within a specific distortion? This dimension-distortion tradeoff question is well understood for the $\ell_{2}$ norm, where $O((\log n)/\epsilon^{2})$ dimensions suffice to achieve $1+\epsilon$ distortion. In sharp contrast, there is a significant gap between upper and lower bounds for dimension reduction in $\ell_{1}$. A recent result shows that distortion $1+\epsilon$ can be achieved with $n/\epsilon^{2}$ dimensions. On the other hand, the only lower bounds known are that distortion $\delta$ requires $n^{\Omega(1/\delta^2)}$ dimensions and that distortion $1+\epsilon$ requires $n^{1/2-O(\epsilon \log(1/\epsilon))}$ dimensions. In this work, we show the first near-linear lower bounds for dimension reduction in $\ell_{1}$. In particular, we show that $1+\epsilon$ distortion requires at least $n^{1-O(1/\log(1/\epsilon))}$ dimensions. Our proofs are combinatorial, but inspired by linear programming. In fact, our techniques lead to a simple combinatorial argument that is equivalent to the LP-based proof of Brinkman-Charikar for lower bounds on dimension reduction in $\ell_{1}$. (A small sketch contrasting this with the $\ell_{2}$ case appears after this list.)
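
As a point of reference for "On the Power of Adaptivity in Sparse Recovery", here is a minimal numerical sketch of the $\ell_2/\ell_2$ recovery guarantee stated in the abstract. It only checks the guarantee for a synthetic vector and an idealized recovery; it does not implement the adaptive measurement schemes from the talk, and the helper functions and use of numpy are illustrative assumptions, not material from the paper.

    import numpy as np

    def best_k_sparse(x, k):
        """Best k-sparse approximation of x: keep the k largest-magnitude entries."""
        x_k = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-k:]
        x_k[idx] = x[idx]
        return x_k

    def meets_l2_guarantee(x, x_hat, k, eps):
        """Check ||x - x_hat||_2 <= (1 + eps) * min over k-sparse x' of ||x - x'||_2."""
        err = np.linalg.norm(x - x_hat)
        opt = np.linalg.norm(x - best_k_sparse(x, k))
        return err <= (1 + eps) * opt

    # Toy example: a nearly 3-sparse vector plus small noise.
    rng = np.random.default_rng(0)
    x = np.zeros(100)
    x[:3] = [10.0, -7.0, 5.0]
    x += 0.01 * rng.standard_normal(100)
    x_hat = best_k_sparse(x, 3)        # an idealized recovery, for illustration only
    print(meets_l2_guarantee(x, x_hat, k=3, eps=0.1))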
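
For "(1+eps)-Approximate Sparse Recovery", a small numerical illustration of the sparse-output versus non-sparse-output distinction: in the toy example below, hard-thresholding a dense estimate to $k$ entries leaves noticeably more error, relative to the benchmark $\min_{k\text{-sparse } x'} \|x - x'\|_2$, than keeping the dense estimate itself. Both estimates are synthetic stand-ins, not outputs of the algorithms in the talk.

    import numpy as np

    def top_k(v, k):
        """Hard-threshold v to its k largest-magnitude entries (a k-sparse output)."""
        out = np.zeros_like(v)
        idx = np.argsort(np.abs(v))[-k:]
        out[idx] = v[idx]
        return out

    rng = np.random.default_rng(1)
    n, k = 200, 5
    x = 0.5 * rng.standard_normal(n)                # heavy tail of small entries
    x[:k] = [12.0, -9.0, 8.0, -11.0, 10.0]          # k large coordinates

    x_hat_dense = x + 0.2 * rng.standard_normal(n)  # a synthetic non-sparse estimate
    x_hat_sparse = top_k(x_hat_dense, k)            # the same estimate forced to be k-sparse

    benchmark = np.linalg.norm(x - top_k(x, k))     # min over k-sparse x' of ||x - x'||_2
    print("dense output error / benchmark   :", np.linalg.norm(x - x_hat_dense) / benchmark)
    print("k-sparse output error / benchmark:", np.linalg.norm(x - x_hat_sparse) / benchmark)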
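
For "Near-Optimal Column-Based Matrix Reconstruction", a small sketch of the reconstruction task itself: approximate a matrix from the span of a few of its columns and compare against the best rank-$k$ SVD approximation. The columns below are chosen uniformly at random purely for illustration; the talk's algorithms select columns far more carefully and come with provable guarantees.

    import numpy as np

    rng = np.random.default_rng(2)
    m, n, k, c = 60, 80, 5, 12

    # A roughly rank-k matrix plus noise.
    A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n)) + 0.05 * rng.standard_normal((m, n))

    # Pick c columns uniformly at random (illustration only).
    cols = rng.choice(n, size=c, replace=False)
    C = A[:, cols]

    # Best Frobenius-norm reconstruction of A with columns in span(C): C C^+ A.
    A_from_C = C @ np.linalg.pinv(C) @ A

    # Benchmark: the best rank-k approximation from the SVD.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_k = (U[:, :k] * s[:k]) @ Vt[:k]

    print("column-based error:", np.linalg.norm(A - A_from_C, "fro"))
    print("best rank-k error :", np.linalg.norm(A - A_k, "fro"))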
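
For "Near Linear Lower Bound for Dimension Reduction in L1", a small sketch of the $\ell_{2}$ baseline the abstract contrasts against: a random Gaussian projection to $O((\log n)/\epsilon^{2})$ dimensions approximately preserves all pairwise $\ell_{2}$ distances. The constant in the target dimension is an ad hoc assumption; per the talk's lower bound, no analogously small-dimension construction exists for $\ell_{1}$.

    import numpy as np

    rng = np.random.default_rng(3)
    n, d, eps = 50, 1000, 0.25
    target_dim = int(np.ceil(8 * np.log(n) / eps**2))   # O(log n / eps^2); constant is ad hoc

    X = rng.standard_normal((n, d))                      # n points in R^d
    G = rng.standard_normal((d, target_dim)) / np.sqrt(target_dim)
    Y = X @ G                                            # random Gaussian projection

    # Check how well pairwise l2 distances are preserved.
    worst = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            ratio = np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
            worst = max(worst, ratio, 1.0 / ratio)
    print("target dimension:", target_dim)
    print("worst pairwise distortion:", worst)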