TechTalks from event: FOCS 2011

We will be uploading the videos for FOCS 2011 during the week of Nov 28th, 2011. If you find any discrepancy, please let us know by clicking on the "report error" link on the talk page. If you did not permit your video to be published and we have published your talk by mistake, please notify us immediately at support AT weyond.com.

9A

  • A Unified Continuous Greedy Algorithm for Submodular Maximization Authors: Moran Feldman and Joseph (Seffi) Naor and Roy Schwartz
    The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to economics, algorithmic game theory and combinatorial optimization. Classical works on these problems are mostly combinatorial in nature. Recently, however, many results based on continuous algorithmic tools have emerged. The main bottleneck of such continuous techniques is how to approximately solve a non-convex relaxation for the submodular problem at hand. Thus, the efficient computation of better fractional solutions immediately implies improved approximations for numerous applications. A simple and elegant method, called ``continuous greedy'', successfully tackles this issue for monotone submodular objective functions; however, only much more complex tools are known to work for general non-monotone submodular objectives. In this work we present a new unified continuous greedy algorithm which finds approximate fractional solutions for both the non-monotone and monotone cases, and improves on the approximation ratio for many applications. For general non-monotone submodular objective functions, our algorithm achieves an improved approximation ratio of about 1/e. For monotone submodular objective functions, our algorithm achieves an approximation ratio that depends on the density of the polytope defined by the problem at hand, which is always at least as good as the previously known best approximation ratio of 1 - 1/e. Some notable immediate implications are an improved 1/e-approximation for maximizing a non-monotone submodular function subject to a matroid or O(1)-knapsack constraints, and information-theoretic tight approximations for Submodular Max-SAT and Submodular Welfare with k players, for any number of players k. A framework for submodular optimization problems, called the contention resolution framework, was introduced recently by Chekuri et al. The improved approximation ratio of the unified continuous greedy algorithm implies improved approximation ratios for many problems through this framework. Moreover, via a parameter called stopping time, our algorithm merges the relaxation solving and re-normalization steps of the framework, and achieves, for some applications, further improvements. We also describe new monotone balanced contention resolution schemes for various matching, scheduling and packing problems, thus improving the approximations achieved for these problems via the framework. (A toy sketch of the continuous greedy step appears after this session's list.)
  • Enumerative Lattice Algorithms in any Norm via M-ellipsoid Coverings Authors: Daniel Dadush and Chris Peikert and Santosh Vempala
    We give a novel algorithm for enumerating lattice points in any convex body, and give applications to several classic lattice problems, including the Shortest and Closest Vector Problems (SVP and CVP, respectively) and Integer Programming (IP). Our enumeration technique relies on a classical concept from asymptotic convex geometry known as the \emph{M-ellipsoid}, and uses as a crucial subroutine the recent algorithm of Micciancio and Voulgaris (STOC 2010) for lattice problems in the $\ell_{2}$ norm. As a main technical contribution, which may be of independent interest, we build on the techniques of Klartag (Geometric and Functional Analysis, 2006) to give an expected $2^{O(n)}$-time algorithm for computing an M-ellipsoid for any $n$-dimensional convex body. As applications, we give deterministic $2^{O(n)}$-time and -space algorithms for solving exact SVP, and exact CVP when the target point is sufficiently close to the lattice, on $n$-dimensional lattices \emph{in any (semi-)norm} given an M-ellipsoid of the unit ball. In many norms of interest, including all $\ell_{p}$ norms, an M-ellipsoid is computable in deterministic $\poly(n)$ time, in which case these algorithms are fully deterministic. Here our approach may be seen as a derandomization of the ``AKS sieve'' for exact SVP and CVP (Ajtai, Kumar, and Sivakumar; STOC 2001 and CCC 2002). As a further application of our SVP algorithm, we derive an expected $O(f^*(n))^n$-time algorithm for Integer Programming, where $f^*(n)$ denotes the optimal bound in the so-called ``flatness theorem,'' which satisfies $f^*(n) = O(n^{4/3} \polylog(n))$ and is conjectured to be $f^{*}(n)=\Theta(n)$. Our runtime improves upon the previous best of $O(n^{2})^{n}$ by Hildebrand and K{\"o}ppe (2010).
  • A nearly mlogn time solver for SDD linear systems Authors: Ioannis Koutis and Gary L. Miller and Richard Peng
    We present an improved algorithm for solving symmetric diagonally dominant linear systems. On input of an $n\times n$ symmetric diagonally dominant matrix $A$ with $m$ non-zero entries and a vector $b$ such that $A\bar{x} = b$ for some (unknown) vector $\bar{x}$, our algorithm computes a vector $x$ such that $||x-\bar{x}||_A < \epsilon ||\bar{x}||_A$ (where $||\cdot||_A$ denotes the $A$-norm) in time $\tilde{O}(m\log n \log (1/\epsilon))$. The solver utilizes in a standard way a `preconditioning' chain of progressively sparser graphs. To claim the faster running time we make a two-fold improvement in the algorithm for constructing the chain. The new chain exploits previously unknown properties of the graph sparsification algorithm given in [Koutis, Miller, Peng, FOCS 2010], allowing for stronger preconditioning properties. We also present an algorithm of independent interest that constructs nearly-tight low-stretch spanning trees in time $\tilde{O}(m\log{n})$, a factor of $O(\log{n})$ faster than the algorithm in [Abraham, Bartal, Neiman, FOCS 2008]. This speedup directly reflects on the construction time of the preconditioning chain. (A sketch of the solver's guarantee, illustrated with plain conjugate gradients, appears after this session's list.)
  • Balls and Bins: Smaller Hash Families and Faster Evaluation Authors: L. Elisa Celis and Omer Reingold and Gil Segev and Udi Wieder
    A fundamental fact in the analysis of randomized algorithms is that when $n$ balls are hashed into $n$ bins independently and uniformly at random, with high probability each bin contains at most $O(\log n / \log \log n)$ balls. In various applications, however, the assumption that a truly random hash function is available is not always valid, and explicit functions are required. In this paper we study the size of families (or, equivalently, the description length of their functions) that guarantee a maximal load of $O(\log n / \log \log n)$ with high probability, as well as the evaluation time of their functions. Whereas such functions must be described using $\Omega(\log n)$ bits, the best upper bound was formerly $O(\log^2 n / \log \log n)$ bits, which is attained by $O(\log n / \log \log n)$-wise independent functions. Traditional constructions of the latter offer an evaluation time of $O(\log n / \log \log n)$, which according to Siegel's lower bound [FOCS '89] can be reduced only at the cost of significantly increasing the description length. We construct two families that guarantee a maximal load of $O(\log n / \log \log n)$ with high probability. Our constructions are based on two different approaches, and exhibit different trade-offs between the description length and the evaluation time. The first construction shows that $O(\log n / \log \log n)$-wise independence can in fact be replaced by ``gradually increasing independence'', resulting in functions that are described using $O(\log n \log \log n)$ bits and evaluated in time $O(\log n \log \log n)$. The second construction is based on derandomization techniques for space-bounded computations combined with a tailored construction of a pseudorandom generator, resulting in functions that are described using $O(\log^{3/2} n)$ bits and evaluated in time $O(\sqrt{\log n})$. The latter can be compared to Siegel's lower bound stating that $O(\log n / \log \log n)$-wise independent functions that are evaluated in time $O(\sqrt{\log n})$ must be described using $\Omega(2^{\sqrt{\log n}})$ bits. (A quick balls-and-bins simulation appears after this session's list.)
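
The continuous greedy method mentioned in the Feldman-Naor-Schwartz abstract works on the multilinear extension of the submodular objective: it maintains a fractional point inside the feasible polytope and repeatedly steps toward a feasible direction chosen greedily from sampled marginal gains. Below is a minimal sketch of the classical monotone variant only, on a toy coverage instance under a cardinality constraint; the instance and the constants (T, SAMPLES) are illustrative choices made here, not anything from the paper, and the unified non-monotone algorithm requires additional damping that is omitted.

    import random

    # Toy ground set: 5 sets over a universe of 10 points; coverage is monotone submodular.
    SETS = {0: {0, 1, 2}, 1: {2, 3}, 2: {3, 4, 5}, 3: {5, 6, 7}, 4: {7, 8, 9}}
    K = 2          # cardinality constraint: pick at most K sets
    T = 100        # number of continuous-greedy steps
    SAMPLES = 50   # samples per estimate of a gradient entry (illustrative)

    def coverage(chosen):
        """f(S): number of universe points covered by the chosen sets."""
        covered = set()
        for i in chosen:
            covered |= SETS[i]
        return len(covered)

    def sample_from(x):
        """Random set R containing each element i independently with probability x[i]."""
        return {i for i in SETS if random.random() < x[i]}

    def gradient_entry(x, i):
        """Estimate dF/dx_i = E[f(R + i) - f(R - i)], F the multilinear extension of f."""
        total = 0
        for _ in range(SAMPLES):
            r = sample_from(x)
            total += coverage(r | {i}) - coverage(r - {i})
        return total / SAMPLES

    def continuous_greedy():
        x = {i: 0.0 for i in SETS}
        for _ in range(T):
            grad = {i: gradient_entry(x, i) for i in SETS}
            # Greedy direction: the best vertex of the cardinality polytope,
            # i.e. the K coordinates with the largest estimated gradient.
            for i in sorted(grad, key=grad.get, reverse=True)[:K]:
                x[i] = min(1.0, x[i] + 1.0 / T)
        return x

    if __name__ == "__main__":
        x = continuous_greedy()
        print("fractional solution:", {i: round(x[i], 2) for i in SETS})
        print("estimated F(x):", sum(coverage(sample_from(x)) for _ in range(200)) / 200)

Rounding the fractional point back to an integral solution is a separate step; the contention resolution framework cited in the abstract is one general way to do that rounding.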
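
The guarantee in the Koutis-Miller-Peng abstract is that the returned $x$ is close to the true solution in the $A$-norm. The sketch below only illustrates that interface, using a textbook conjugate-gradient iteration on a small SDD matrix built from a path graph; it contains none of the preconditioning chain or low-stretch spanning trees that give the paper its near-linear running time.

    import numpy as np

    def conjugate_gradient(A, b, eps=1e-8, max_iter=1000):
        """Textbook conjugate gradients for a symmetric positive-definite system A x = b.
        Illustration only: no preconditioning, no near-linear-time guarantee."""
        x = np.zeros_like(b, dtype=float)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) <= eps * np.linalg.norm(b):   # simple residual stopping rule
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    if __name__ == "__main__":
        # Small SDD example: path-graph Laplacian with boundary, plus a little diagonal slack.
        n = 6
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.1 * np.eye(n)
        x_true = np.arange(1.0, n + 1.0)
        b = A @ x_true
        x = conjugate_gradient(A, b)
        a_norm_err = np.sqrt((x - x_true) @ A @ (x - x_true))   # the ||.||_A error from the abstract
        print("A-norm error:", a_norm_err)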
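
The starting point of the Celis-Reingold-Segev-Wieder abstract, that $n$ balls thrown uniformly into $n$ bins give maximum load $O(\log n / \log \log n)$ with high probability, is easy to observe empirically. The simulation below uses Python's default pseudorandom generator as a stand-in for a truly random hash function, which is exactly the assumption the paper works to remove.

    import math
    import random
    from collections import Counter

    def max_load(n, trials=20):
        """Throw n balls into n bins uniformly at random; return the worst max load seen."""
        worst = 0
        for _ in range(trials):
            bins = Counter(random.randrange(n) for _ in range(n))
            worst = max(worst, max(bins.values()))
        return worst

    if __name__ == "__main__":
        for n in (1000, 10000, 100000):
            bound = math.log(n) / math.log(math.log(n))
            print(f"n={n:>6}  worst max load={max_load(n)}  log n / log log n = {bound:.1f}")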

9B

  • Lexicographic Products and the Power of Non-Linear Network Coding Authors: Anna Blasiak and Robert Kleinberg and Eyal Lubetzky
    We introduce a technique for establishing and amplifying gaps between parameters of network coding and index coding. The technique uses linear programs to establish separations between combinatorial and coding-theoretic parameters and applies hypergraph lexicographic products to amplify these separations. This entails combining the dual solutions of the lexicographic multiplicands and proving that they are a valid dual of the product. Our result is general enough to apply to a large family of linear programs. This blend of linear programs and lexicographic products gives a recipe for constructing hard instances in which the gap between combinatorial or coding-theoretic parameters is polynomially large. We find polynomial gaps in cases in which the largest previously known gaps were only small constant factors or entirely unknown. Most notably, we show a polynomial separation between linear and non-linear network coding rates. This involves exploiting a connection between matroids and index coding to establish a previously unknown separation between linear and non-linear index coding rates. We also construct index coding problems with a polynomial gap between the broadcast rate and the trivial lower bound for which no gap was previously known.
  • Quadratic Goldreich-Levin Theorems Authors: Madhur Tulsiani and Julia Wolf
    Decomposition theorems in classical Fourier analysis enable us to express a bounded function in terms of few linear phases with large Fourier coefficients plus a part that is pseudorandom with respect to linear phases. The Goldreich-Levin algorithm can be viewed as an algorithmic analogue of such a decomposition as it gives a way to efficiently find the linear phases associated with large Fourier coefficients. In the study of "quadratic Fourier analysis", higher-degree analogues of such decompositions have been developed in which the pseudorandomness property is stronger but the structured part correspondingly weaker. For example, it has previously been shown that it is possible to express a bounded function as a sum of a few quadratic phases plus a part that is small in the $U^3$ norm, defined by Gowers for the purpose of counting arithmetic progressions of length 4. We give a polynomial time algorithm for computing such a decomposition. A key part of the algorithm is a local self-correction procedure for Reed-Muller codes of order 2 (over $\F_2^n$) for a function at distance $1/2-\epsilon$ from a codeword. Given a function $f:\F_2^n \to \{-1,1\}$ at fractional Hamming distance $1/2-\epsilon$ from a quadratic phase (which is a codeword of Reed-Muller code of order 2), we give an algorithm that runs in time polynomial in $n$ and finds a codeword at distance at most $1/2-\eta$ for $\eta = \eta(\epsilon)$. This is an algorithmic analogue of Samorodnitsky's result, which gave a tester for the above problem. To our knowledge, it represents the first instance of a correction procedure for any class of codes beyond the list-decoding radius. In the process, we give algorithmic versions of results from additive combinatorics used in Samorodnitsky's proof and a refined version of the inverse theorem for the Gowers $U^3$ norm over $\F_2^n$. (A sketch of the classical linear Goldreich-Levin algorithm appears after this session's list.)
  • Optimal testing of multivariate polynomials over small prime fields Authors: Elad Haramaty and Amir Shpilka and Madhu Sudan
    We consider the problem of testing if a given function f:F_q^n -> F_q is close to an n-variate degree d polynomial over the finite field F_q of q elements. The natural, low-query test for this property would be to pick the smallest dimension t = t_{q,d} \approx d/q such that every function of degree greater than d reveals this feature on some t-dimensional affine subspace of F_q^n, and to test that f, when restricted to a random t-dimensional affine subspace, is a polynomial of degree at most d on this subspace. Such a test makes only q^t queries, independent of n. Previous works, by Alon et al. (AKKLR), and Kaufman & Ron and Jutla et al., showed that this natural test rejected functions that were \Omega(1)-far from degree d polynomials with probability at least \Omega(q^{-t}) (the results of Kaufman & Ron hold for all fields F_q, while the results of Jutla et al. hold only for fields of prime order). Thus to get a constant probability of detecting functions that were at constant distance from the space of degree d polynomials, the tests made q^{2t} queries. Kaufman & Ron also noted that when q is prime, then q^t queries are necessary. Thus these tests were off by at least a quadratic factor from known lower bounds. It was unclear if the soundness analysis of these tests was tight, and this question relates closely to the task of understanding the behavior of the Gowers Norm. This motivated the work of Bhattacharyya et al., who gave an optimal analysis for the case of the binary field and showed that the natural test actually rejects functions that are \Omega(1)-far from degree d polynomials with probability at least \Omega(1). In this work we give an optimal analysis of this test for all fields, showing that the natural test does indeed reject functions that are \Omega(1)-far from degree $d$ polynomials with \Omega(1) probability. Our analysis thus shows that this test is optimal (matches known lower bounds) when q is prime. (It is also potentially best possible for all fields.) Our approach extends the proof technique of Bhattacharyya et al.; however, it has to overcome many technical barriers in the process. The natural extension of their analysis leads to an O(q^d) query complexity, which is worse than that of Kaufman and Ron for all q except 2! The main technical ingredient in our work is a tight analysis of the number of ``hyperplanes'' (affine subspaces of co-dimension $1$) on which the restriction of a degree d polynomial has degree less than $d$. We show that the number of such hyperplanes is at most O(q^{t_{q,d}}) - which is tight to within constant factors. (A sketch of the natural test, specialized to the binary field, appears after this session's list.)
  • Tight lower bounds for 2-query LCCs over finite fields Authors: Arnab Bhattacharyya and Zeev Dvir and Shubhangi Saraf and Amir Shpilka
    A Locally Correctable Code (LCC) is an error correcting code that has a probabilistic self-correcting algorithm that, with high probability, can correct any coordinate of the codeword by looking at only a few other coordinates, even if a fraction $\delta$ of the coordinates are corrupted. LCCs are a stronger form of LDCs (Locally Decodable Codes), which have received a lot of attention recently due to their many applications and surprising constructions. In this work we show a separation between 2-query LDCs and LCCs over finite fields of prime order. Specifically, we prove a lower bound of the form $p^{\Omega(\delta d)}$ on the length of linear $2$-query LCCs over $\F_p$ that encode messages of length $d$. Our bound improves over the known bound of $2^{\Omega(\delta d)}$ \cite{GKST06,KdW04, DS07}, which is tight for LDCs. Our proof makes use of tools from additive combinatorics which have played an important role in several recent results in Theoretical Computer Science. We also obtain, as corollaries of our main theorem, new results in incidence geometry over finite fields. The first is an improvement to the Sylvester-Gallai theorem over finite fields \cite{SS10} and the second is a new analog of Beck's theorem over finite fields. (A toy 2-query correction of the Hadamard code, the textbook example of such a code, appears after this session's list.)
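
The Goldreich-Levin algorithm invoked in the Tulsiani-Wolf abstract finds, given query access to $f$, all linear phases (characters) with large Fourier coefficients, by estimating the Fourier weight of prefix buckets and recursing only on heavy buckets. Below is a sketch of that classical linear case; the thresholds, sample counts and the noisy-character test function are illustrative choices made here, and the paper's quadratic analogue (finding quadratic phases) is substantially harder.

    import random

    N = 8          # number of Boolean variables
    TAU = 0.5      # report characters with |Fourier coefficient| >= TAU
    SAMPLES = 2000 # samples per bucket-weight estimate (illustrative)

    def parity(z):
        return bin(z).count("1") % 2

    def noisy_character(a, noise=0.1):
        """Agrees with the character chi_a(x) = (-1)^{a.x} except on a 'noise' fraction,
        so its correlation with chi_a is about 1 - 2*noise = 0.8 >= TAU."""
        flipped = {x for x in range(2 ** N) if random.random() < noise}
        def f(x):
            val = -1 if parity(a & x) else 1
            return -val if x in flipped else val
        return f

    def bucket_weight(f, k, a):
        """Estimate the total Fourier weight of characters whose first k bits equal a,
        via the identity W_a = E[f(x,y) f(x',y) chi_a(x xor x')] with x, x' on the
        low k bits and y shared on the remaining bits."""
        hi = N - k
        total = 0.0
        for _ in range(SAMPLES):
            x, x2 = random.getrandbits(k), random.getrandbits(k)
            y = random.getrandbits(hi) << k if hi else 0
            sign = -1 if parity(a & (x ^ x2)) else 1
            total += f(x | y) * f(x2 | y) * sign
        return total / SAMPLES

    def goldreich_levin(f):
        """Bucket-and-recurse search for all characters with squared weight >= TAU^2 / 2."""
        candidates = [0]
        for k in range(N):
            candidates = [a | (bit << k)
                          for a in candidates for bit in (0, 1)
                          if bucket_weight(f, k + 1, a | (bit << k)) >= TAU ** 2 / 2]
        return candidates

    if __name__ == "__main__":
        secret = 0b10110101
        found = goldreich_levin(noisy_character(secret))
        print("recovered:", [format(a, "08b") for a in found], " secret:", format(secret, "08b"))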
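
For the Haramaty-Shpilka-Sudan talk, the "natural test" is easiest to see over the binary field, the case settled earlier by Bhattacharyya et al.: a function $f:\F_2^n \to \F_2$ has degree at most $d$ exactly when it sums to zero over every affine subspace of dimension $d+1$, so the tester picks a random such subspace and checks the sum using $2^{d+1}$ queries. A minimal sketch of that specialization (the parameters are illustrative; the paper's contribution is the analysis over general $\F_q$):

    import random

    N = 10   # number of variables; points are N-bit integers

    def rank_f2(vectors):
        """Rank over F_2 of a list of bitmask vectors (Gaussian elimination by leading bit)."""
        basis = {}                       # leading-bit position -> basis vector
        for v in vectors:
            while v:
                lead = v.bit_length() - 1
                if lead not in basis:
                    basis[lead] = v
                    break
                v ^= basis[lead]
        return len(basis)

    def degree_test(f, d, trials=50):
        """Natural test for 'degree <= d' over F_2: sum f over a random (d+1)-dimensional
        affine subspace and accept iff the sum is 0; repeated to amplify detection."""
        for _ in range(trials):
            while True:      # random base point plus d+1 linearly independent directions
                dirs = [random.getrandbits(N) for _ in range(d + 1)]
                if rank_f2(dirs) == d + 1:
                    break
            base = random.getrandbits(N)
            total = 0
            for mask in range(1 << (d + 1)):
                point = base
                for i in range(d + 1):
                    if (mask >> i) & 1:
                        point ^= dirs[i]
                total ^= f(point)
            if total != 0:
                return False             # caught a violation of 'degree <= d'
        return True

    if __name__ == "__main__":
        # x0*x1 has degree 2: passes the d=2 test, fails the d=1 test with high probability.
        f = lambda x: (x & 1) & ((x >> 1) & 1)
        print("accepted as degree <= 2:", degree_test(f, 2))
        print("accepted as degree <= 1:", degree_test(f, 1))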
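
The Bhattacharyya-Dvir-Saraf-Shpilka result is a lower bound, but the primitive it constrains is easy to exhibit: the Hadamard code is the textbook linear 2-query LCC, where any coordinate $a \cdot x$ can be recovered from a $\delta$-corrupted codeword with probability at least $1 - 2\delta$ using two queries. A toy illustration of that standard example (not a construction from the paper):

    import random

    N = 10        # message length; the codeword has 2^N positions
    DELTA = 0.05  # fraction of corrupted codeword positions

    def dot(a, x):
        return bin(a & x).count("1") % 2

    def corrupted_hadamard_word(a):
        """Hadamard codeword of message a, with a DELTA fraction of positions flipped."""
        word = [dot(a, x) for x in range(2 ** N)]
        for pos in random.sample(range(2 ** N), int(DELTA * 2 ** N)):
            word[pos] ^= 1
        return word

    def locally_correct(word, x):
        """2-query correction of coordinate x: read positions r and x^r for a random r.
        Both are uncorrupted with probability >= 1 - 2*DELTA, and then the answer is a.x."""
        r = random.randrange(2 ** N)
        return word[r] ^ word[x ^ r]

    if __name__ == "__main__":
        a = random.randrange(2 ** N)
        word = corrupted_hadamard_word(a)
        x = random.randrange(2 ** N)
        agree = sum(locally_correct(word, x) == dot(a, x) for _ in range(1000))
        print(f"correct answers: {agree}/1000 (expected at least {int((1 - 2 * DELTA) * 1000)})")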

10A

  • A Two Prover One Round Game with Strong Soundness Authors: Subhash Khot and Muli Safra
    We show that for any fixed prime $q \geq 5$ and constant $\zeta > 0$, it is NP-hard to distinguish whether a two-prover one-round game with $q^6$ answers has value at least $1-\zeta$ or at most $\frac{4}{q}$. The result is obtained by combining two techniques: (i) An Inner PCP based on the {\it point versus subspace} test for linear functions. The test is analyzed Fourier-analytically. (ii) The Outer/Inner PCP composition that relies on a certain {\it sub-code covering} property for Hadamard codes. This is a new and essentially black-box method to translate a {\it codeword test} for Hadamard codes to a {\it consistency test}, leading to a full PCP construction. As an application, we show that unless NP has quasi-polynomial time deterministic algorithms, the Quadratic Programming Problem is inapproximable within factor $(\log n)^{1/6 - o(1)}$.
  • The Randomness Complexity of Parallel Repetition Authors: Kai-Min Chung and Rafael Pass
    Consider an $m$-round interactive protocol with soundness error $1/2$. How much extra randomness is required to decrease the soundness error to $\delta$ through parallel repetition? Previous work shows that for \emph{public-coin} interactive protocols with \emph{unconditional soundness}, $m \cdot O(\log (1/\delta))$ bits of extra randomness suffice. In this work, we initiate a more general study of the above question. (i) We establish the first derandomized parallel repetition theorem for public-coin interactive protocols with \emph{computational soundness} (a.k.a. arguments); the parameters of our result essentially match the earlier works in the information-theoretic setting. (ii) We show that obtaining even a sub-linear dependency on the number of rounds $m$ (i.e., $o(m) \cdot \log(1/\delta)$) in either the information-theoretic or computational setting requires proving $\P \neq \PSPACE$. (iii) We show that non-trivial derandomized parallel repetition for private-coin protocols is impossible in the information-theoretic setting, and requires proving $\P \neq \PSPACE$ in the computational setting. (A back-of-the-envelope randomness count appears after this session's list.)
  • Privacy Amplification and Non-Malleable Extractors Via Character Sums Authors: Xin Li and Trevor D. Wooley and David Zuckerman
    In studying how to communicate over a public channel with an active adversary, Dodis and Wichs introduced the notion of a non-malleable extractor. A non-malleable extractor dramatically strengthens the notion of a strong extractor. A strong extractor takes two inputs, a weakly-random $x$ and a uniformly random seed $y$, and outputs a string which appears uniform, even given $y$. For a non-malleable extractor $nmExt$, the output $nmExt(x,y)$ should appear uniform given $y$ as well as $nmExt(x,A(y))$, where $A$ is an arbitrary function with $A(y) \neq y$. We show that an extractor introduced by Chor and Goldreich is non-malleable when the entropy rate is above half. It outputs a linear number of bits when the entropy rate is $1/2 + \alpha$, for any $\alpha>0$. Previously, no nontrivial parameters were known for any non-malleable extractor. To achieve a polynomial running time when outputting many bits, we rely on a widely-believed conjecture about the distribution of prime numbers in arithmetic progressions. Our analysis involves a character sum estimate, which may be of independent interest. Using our non-malleable extractor, we obtain protocols for ``privacy amplification'': key agreement between two parties who share a weakly-random secret. Our protocols work in the presence of an active adversary with unlimited computational power, and have optimal entropy loss. When the secret has entropy rate greater than $1/2$, the protocol follows from a result of Dodis and Wichs, and takes two rounds. When the secret has entropy rate $\delta$ for any constant~$\delta>0$, our new protocol takes $O(1)$ rounds. Our protocols run in polynomial time under the above well-known conjecture about primes. (The non-malleability condition is written out after this session's list.)
  • Stateless Cryptographic Protocols Authors: Vipul Goyal and Hemanta K. Maji
    Secure computation protocols inherently involve multiple rounds of interaction among the parties where, typically, a party has to keep a state about what has happened in the protocol so far and then \emph{wait} for the other party to respond. We study whether this is inherent. In particular, we study the possibility of designing cryptographic protocols where the parties can be completely stateless and compute the outgoing message by applying a single fixed function to the incoming message (independent of any state). The problem of designing stateless secure computation protocols can be reduced to the problem of designing protocols satisfying the notion of resettable computation introduced by Canetti, Goldreich, Goldwasser and Micali (FOCS'01) and widely studied thereafter. The current state of the art in resettable computation allows for the construction of protocols which provide security only when a \emph{single predetermined} party is resettable. An exception is the case of the zero-knowledge functionality, for which a protocol in which both parties are resettable was recently obtained by Deng, Goyal and Sahai (FOCS'09). The fundamental question left open in this sequence of works is whether fully-resettable computation is possible when: (1) an adversary can corrupt any number of parties, and (2) the adversary can reset any party to its original state during the execution of the protocol and can restart the protocol. In this paper, we resolve the above problem by constructing secure protocols realizing \emph{any} efficiently computable multi-party functionality in the plain model under standard cryptographic assumptions. First, we construct a Fully-Resettable Simulation Sound Zero-Knowledge (ss-rs-rZK) protocol. Next, based on these ss-rs-rZK protocols, we show how to compile any semi-honest secure protocol into a protocol secure against fully resetting adversaries. Next, we study a seemingly unrelated open question: ``Does there exist a functionality which, in the concurrent setting, is impossible to securely realize using black-box (BB) simulation but can be realized using non-black-box (NBB) simulation?'' We resolve the above question in the affirmative by giving an example of such a (reactive) functionality. Somewhat surprisingly, this is done by making a connection to the existence of a fully resettable simulation sound zero-knowledge protocol.
  • How to Store a Secret on Continually Leaky Devices Authors: Yevgeniy Dodis and Allison Lewko and Brent Waters and Daniel Wichs
    We consider the question of how to store a value secretly on devices that continually leak information about their internal state to an external attacker. If the secret value is stored on a single device, and the attacker can leak even a single predicate of the internal state of that device, then she may learn some information about the secret value itself. Therefore, we consider a setting where the secret value is shared between multiple devices (or multiple components of one device), each of which continually leaks arbitrary adaptively chosen predicates of its individual state. Since leakage is continual, each device must also continually update its state so that an attacker cannot just leak it entirely one bit at a time. In our model, the devices update their state individually and asynchronously, without any communication between them. The update process is necessarily randomized, and its randomness can leak as well. As our main result, we construct a sharing scheme for two devices, where a constant fraction of the internal state of each device can leak in between and during updates. Our scheme has the structure of a public-key encryption, where one share is a secret key and the other is a ciphertext. As a contribution of independent interest, we also get public-key encryption in the continual leakage model, introduced by Brakerski et al. and Dodis et al. (FOCS '10). This scheme tolerates continual leakage on the secret key and the updates, and simplifies the recent construction of Lewko, Lewko and Waters (STOC '11). For our main result, we also show how to update the ciphertexts of the encryption scheme so that the message remains hidden even if an attacker interleaves leakage on secret key and ciphertext shares. The security of our scheme is based on the linear assumption in prime-order bilinear groups. We also provide an extension to general access structures realizable by linear secret sharing schemes across many devices. The main advantage of this extension is that the state of some devices can be compromised entirely, while that of all the remaining devices is susceptible to continual leakage. Lastly, we show the impossibility of information-theoretic sharing schemes in our model, where continually leaky devices update their state individually. (A toy illustration of locally re-randomizing a ciphertext share appears after this session's list.)
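
For the Chung-Pass abstract, the quantity being counted can be made concrete with a rough calculation. For protocols with unconditional soundness, $k$ independent parallel copies bring the soundness error from $1/2$ down to $2^{-k}$, so $k = \lceil \log_2(1/\delta) \rceil$ copies suffice. Assuming each of the $m$ rounds uses $\ell$ fresh public coins ($\ell$ is a parameter introduced here purely for illustration), naive parallel repetition spends

    $m \cdot \ell \cdot \lceil \log_2(1/\delta) \rceil$ extra random bits,

whereas the derandomized repetition theorems discussed in the abstract get by with $m \cdot O(\log(1/\delta))$ extra bits, independent of $\ell$.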
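
The non-malleability requirement described in prose in the Li-Wooley-Zuckerman abstract can be written out symbolically. For a weak source $X$ with sufficient min-entropy, a uniformly random seed $Y$, and any adversarial function $\mathcal{A}$ with $\mathcal{A}(y) \neq y$ for every $y$, a non-malleable extractor $nmExt$ must satisfy

    $(nmExt(X,Y),\ nmExt(X,\mathcal{A}(Y)),\ Y)\ \approx_{\epsilon}\ (U_m,\ nmExt(X,\mathcal{A}(Y)),\ Y),$

where $\approx_{\epsilon}$ denotes statistical distance at most $\epsilon$ and $U_m$ is a uniform $m$-bit string. A strong extractor only needs $(Ext(X,Y),\ Y) \approx_{\epsilon} (U_m,\ Y)$; the extra conditioning on $nmExt(X,\mathcal{A}(Y))$ is what makes the object useful against an active adversary in the privacy amplification protocols mentioned above.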
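
In the Dodis-Lewko-Waters-Wichs scheme, one share is a secret key and the other a ciphertext, and each device refreshes its own share locally, with no communication. The toy sketch below shows only the simplest instance of such a local refresh: re-randomizing an ElGamal ciphertext share so that it still decrypts to the same secret. It is plain ElGamal over a tiny safe-prime group, has none of the leakage resilience of the paper's bilinear-group construction, and does not show how the key share itself would be updated.

    import random

    # Toy group parameters: p = 2q + 1 is a safe prime, g generates the order-q subgroup.
    P, Q, G = 1019, 509, 4

    def keygen():
        x = random.randrange(1, Q)        # share 1: the secret key
        h = pow(G, x, P)                  # public key
        return x, h

    def encrypt(h, m):
        """ElGamal encryption of a group element m; the ciphertext is share 2."""
        r = random.randrange(1, Q)
        return (pow(G, r, P), (m * pow(h, r, P)) % P)

    def rerandomize(h, ct):
        """Locally refresh the ciphertext share using only public information;
        the refreshed ciphertext still decrypts to the same message."""
        c1, c2 = ct
        s = random.randrange(1, Q)
        return ((c1 * pow(G, s, P)) % P, (c2 * pow(h, s, P)) % P)

    def decrypt(x, ct):
        c1, c2 = ct
        return (c2 * pow(c1, Q - x, P)) % P   # c1^(Q-x) = c1^(-x), since c1 has order Q

    if __name__ == "__main__":
        x, h = keygen()
        m = pow(G, 7, P)                      # encode the secret as a group element
        ct = encrypt(h, m)
        for _ in range(5):                    # the ciphertext-holding device refreshes repeatedly
            ct = rerandomize(h, ct)
        print("message recovered after refreshes:", decrypt(x, ct) == m)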