TechTalks from event: FOCS 2011

We will be uploading the videos for FOCS 2011 during the week of Nov 28th, 2011. If you find any discrepancy, please let us know by clicking the report error link on the talk page. If you did not permit your video to be published and we have published your talk by mistake, please notify us immediately at support AT weyond.com

9B

  • Lexicographic Products and the Power of Non-Linear Network Coding Authors: Anna Blasiak and Robert Kleinberg and Eyal Lubetzky
    We introduce a technique for establishing and amplifying gaps between parameters of network coding and index coding. The technique uses linear programs to establish separations between combinatorial and coding-theoretic parameters and applies hypergraph lexicographic products to amplify these separations. This entails combining the dual solutions of the lexicographic multiplicands and proving that they are a valid dual of the product. Our result is general enough to apply to a large family of linear programs. This blend of linear programs and lexicographic products gives a recipe for constructing hard instances in which the gap between combinatorial or coding-theoretic parameters is polynomially large. We find polynomial gaps in cases in which the largest previously known gaps were only small constant factors or entirely unknown. Most notably, we show a polynomial separation between linear and non-linear network coding rates. This involves exploiting a connection between matroids and index coding to establish a previously unknown separation between linear and non-linear index coding rates. We also construct index coding problems with a polynomial gap between the broadcast rate and the trivial lower bound for which no gap was previously known.
  • Quadratic Goldreich-Levin Theorems Authors: Madhur Tulsiani and Julia Wolf
    Decomposition theorems in classical Fourier analysis enable us to express a bounded function in terms of few linear phases with large Fourier coefficients plus a part that is pseudorandom with respect to linear phases. The Goldreich-Levin algorithm can be viewed as an algorithmic analogue of such a decomposition as it gives a way to efficiently find the linear phases associated with large Fourier coefficients. In the study of "quadratic Fourier analysis", higher-degree analogues of such decompositions have been developed in which the pseudorandomness property is stronger but the structured part correspondingly weaker. For example, it has previously been shown that it is possible to express a bounded function as a sum of a few quadratic phases plus a part that is small in the $U^3$ norm, defined by Gowers for the purpose of counting arithmetic progressions of length 4. We give a polynomial time algorithm for computing such a decomposition. A key part of the algorithm is a local self-correction procedure for Reed-Muller codes of order 2 (over $\F_2^n$) for a function at distance $1/2-\epsilon$ from a codeword. Given a function $f:\F_2^n \to \{-1,1\}$ at fractional Hamming distance $1/2-\epsilon$ from a quadratic phase (which is a codeword of Reed-Muller code of order 2), we give an algorithm that runs in time polynomial in $n$ and finds a codeword at distance at most $1/2-\eta$ for $\eta = \eta(\epsilon)$. This is an algorithmic analogue of Samorodnitsky's result, which gave a tester for the above problem. To our knowledge, it represents the first instance of a correction procedure for any class of codes, beyond the list-decoding radius. In the process, we give algorithmic versions of results from additive combinatorics used in Samorodnitsky's proof and a refined version of the inverse theorem for the Gowers $U^3$ norm over $\F_2^n$.
  • Optimal testing of multivariate polynomials over small prime fields Authors: Elad Haramaty and Amir Shpilka and Madhu Sudan
    We consider the problem of testing if a given function f:F_q^n -> F_q is close to an n-variate degree d polynomial over the finite field F_q of q elements. The natural, low-query test for this property would be to pick the smallest dimension t = t_{q,d} ~ d/q such that every function of degree greater than d reveals this feature on some t-dimensional affine subspace of F_q^n, and to test that f, when restricted to a random t-dimensional affine subspace, is a polynomial of degree at most d on this subspace. Such a test makes only q^t queries, independent of n. Previous works by Alon et al. (AKKLR), Kaufman & Ron, and Jutla et al. showed that this natural test rejected functions that were \Omega(1)-far from degree d-polynomials with probability at least \Omega(q^{-t}) (the results of Kaufman & Ron hold for all fields F_q, while the results of Jutla et al. hold only for fields of prime order). Thus to get a constant probability of detecting functions that were at constant distance from the space of degree d polynomials, the tests made q^{2t} queries. Kaufman & Ron also noted that when q is prime, q^t queries are necessary. Thus these tests were off by at least a quadratic factor from known lower bounds. It was unclear whether the soundness analysis of these tests was tight, and this question relates closely to the task of understanding the behavior of the Gowers Norm. This motivated the work of Bhattacharyya et al., who gave an optimal analysis for the case of the binary field and showed that the natural test actually rejects functions that are \Omega(1)-far from degree d-polynomials with probability at least \Omega(1). In this work we give an optimal analysis of this test for all fields, showing that the natural test does indeed reject functions that are \Omega(1)-far from degree $d$ polynomials with \Omega(1) probability. Our analysis thus shows that this test is optimal (matches known lower bounds) when q is prime. (It is also potentially best possible for all fields.) Our approach extends the proof technique of Bhattacharyya et al.; however, it has to overcome many technical barriers in the process. The natural extension of their analysis leads to an O(q^d) query complexity, which is worse than that of Kaufman and Ron for all q except 2! The main technical ingredient in our work is a tight analysis of the number of ``hyperplanes'' (affine subspaces of co-dimension $1$) on which the restriction of a degree d polynomial has degree less than $d$. We show that the number of such hyperplanes is at most O(q^{t_{q,d}}), which is tight to within constant factors. (A toy q = 2 instantiation of the natural test is sketched after this session's listing.)
  • Tight lower bounds for 2-query LCCs over finite fields Authors: Arnab Bhattacharyya and Zeev Dvir and Shubhangi Saraf and Amir Shpilka
    A Locally Correctable Code (LCC) is an error correcting code that has a probabilistic self-correcting algorithm that, with high probability, can correct any coordinate of the codeword by looking at only a few other coordinates, even if a fraction $\delta$ of the coordinates are corrupted. LCCs are a stronger form of LDCs (Locally Decodable Codes), which have received a lot of attention recently due to their many applications and surprising constructions. In this work we show a separation between 2-query LDCs and LCCs over finite fields of prime order. Specifically, we prove a lower bound of the form $p^{\Omega(\delta d)}$ on the length of linear $2$-query LCCs over $\F_p$ that encode messages of length $d$. Our bound improves over the known bound of $2^{\Omega(\delta d)}$ \cite{GKST06,KdW04, DS07}, which is tight for LDCs. Our proof makes use of tools from additive combinatorics which have played an important role in several recent results in Theoretical Computer Science. We also obtain, as corollaries of our main theorem, new results in incidence geometry over finite fields. The first is an improvement to the Sylvester-Gallai theorem over finite fields \cite{SS10} and the second is a new analog of Beck's theorem over finite fields.
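
For reference, the Gowers $U^3$ norm invoked in the Tulsiani-Wolf abstract above has the following standard definition (textbook material, not taken from the talk itself): for $f : \mathbb{F}_2^n \to \mathbb{R}$,

    \[
      \|f\|_{U^3}^{8} \;=\; \mathop{\mathbb{E}}_{x,\,h_1,\,h_2,\,h_3 \in \mathbb{F}_2^n}\ \prod_{S \subseteq \{1,2,3\}} f\Bigl(x + \sum_{i \in S} h_i\Bigr).
    \]

Roughly, the decomposition discussed in that abstract writes a bounded $f$ as $\sum_i c_i\,(-1)^{q_i(x)} + g$ with each $q_i$ a quadratic polynomial over $\mathbb{F}_2^n$ and $\|g\|_{U^3}$ small; the algorithmic contribution is finding such $q_i$ in polynomial time.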
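
The natural test in the Haramaty-Shpilka-Sudan abstract is easiest to see over the binary field, where t_{2,d} = d+1 and checking that a restriction has degree at most d amounts to checking that f XORs to zero over the chosen subspace. The sketch below is a minimal q = 2 illustration only; the function and parameter names are ours, and it is not the general-q analysis of the paper.

    import random

    def natural_degree_test(f, n, d, trials=1):
        """One-sided test (q = 2 case) that f: F_2^n -> F_2 has degree <= d.

        Each trial picks a base point and d+1 random directions, i.e. a random
        affine subspace of dimension at most d+1, and XORs f over all of its
        2^(d+1) points.  Polynomials of degree <= d always yield 0; the cited
        works analyse how often far-from-low-degree functions fail to.
        """
        for _ in range(trials):
            base = [random.randrange(2) for _ in range(n)]
            dirs = [[random.randrange(2) for _ in range(n)] for _ in range(d + 1)]
            acc = 0
            for mask in range(1 << (d + 1)):        # enumerate the subspace points
                point = base[:]
                for i in range(d + 1):
                    if (mask >> i) & 1:
                        point = [(a + b) % 2 for a, b in zip(point, dirs[i])]
                acc ^= f(point)
            if acc != 0:
                return False                        # reject
        return True                                 # accept

    # Example: x0*x1 has degree 2, so it always passes with d = 2.
    print(natural_degree_test(lambda x: (x[0] * x[1]) % 2, n=6, d=2, trials=20))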

10A

  • A Two Prover One Round Game with Strong Soundness Authors: Subhash Khot and Muli Safra
    We show that for any fixed prime $q \geq 5$ and constant $\zeta > 0$, it is NP-hard to distinguish whether a two prover one round game with $q^6$ answers has value at least $1-\zeta$ or at most $\frac{4}{q}$. The result is obtained by combining two techniques: (i) An Inner PCP based on the {\it point versus subspace} test for linear functions. The test is analyzed Fourier analytically. (ii) The Outer/Inner PCP composition that relies on a certain {\it sub-code covering} property for Hadamard codes. This is a new and essentially black-box method to translate a {\it codeword test} for Hadamard codes to a {\it consistency test}, leading to a full PCP construction. As an application, we show that unless NP has quasi-polynomial time deterministic algorithms, the Quadratic Programming Problem is inapproximable within factor $(\log n)^{1/6 - o(1)}$.
  • The Randomness Complexity of Parallel Repetition Authors: Kai-Min Chung and Rafael Pass
    Consider an $m$-round interactive protocol with soundness error $1/2$. How much extra randomness is required to decrease the soundness error to $\delta$ through parallel repetition? Previous work shows that for \emph{public-coin} interactive protocols with \emph{unconditional soundness}, $m \cdot O(\log (1/\delta))$ bits of extra randomness suffice. In this work, we initiate a more general study of the above question. \begin{itemize} \item We establish the first derandomized parallel repetition theorem for public-coin interactive protocols with \emph{computational soundness} (a.k.a. arguments). The parameters of our result essentially match those of the earlier works in the information-theoretic setting. \item We show that obtaining even a sub-linear dependency on the number of rounds $m$ (i.e., $o(m) \cdot \log(1/\delta)$) in either the information-theoretic or computational settings requires proving $\P \neq \PSPACE$. \item We show that non-trivial derandomized parallel repetition for private-coin protocols is impossible in the information-theoretic setting, and requires proving $\P \neq \PSPACE$ in the computational setting. \end{itemize} (A back-of-the-envelope comparison of these randomness costs appears after this session's listing.)
  • Privacy Amplification and Non-Malleable Extractors Via Character Sums Authors: Xin Li and Trevor D. Wooley and David Zuckerman
    In studying how to communicate over a public channel with an active adversary, Dodis and Wichs introduced the notion of a non-malleable extractor. A non-malleable extractor dramatically strengthens the notion of a strong extractor. A strong extractor takes two inputs, a weakly-random $x$ and a uniformly random seed $y$, and outputs a string which appears uniform, even given $y$. For a non-malleable extractor $nmExt$, the output $nmExt(x,y)$ should appear uniform given $y$ as well as $nmExt(x,A(y))$, where $A$ is an arbitrary function with $A(y) \neq y$. (The formal definition is recalled after this session's listing.) We show that an extractor introduced by Chor and Goldreich is non-malleable when the entropy rate is above half. It outputs a linear number of bits when the entropy rate is $1/2 + \alpha$, for any $\alpha>0$. Previously, no nontrivial parameters were known for any non-malleable extractor. To achieve a polynomial running time when outputting many bits, we rely on a widely-believed conjecture about the distribution of prime numbers in arithmetic progressions. Our analysis involves a character sum estimate, which may be of independent interest. Using our non-malleable extractor, we obtain protocols for ``privacy amplification'': key agreement between two parties who share a weakly-random secret. Our protocols work in the presence of an active adversary with unlimited computational power, and have optimal entropy loss. When the secret has entropy rate greater than $1/2$, the protocol follows from a result of Dodis and Wichs, and takes two rounds. When the secret has entropy rate $\delta$ for any constant~$\delta>0$, our new protocol takes $O(1)$ rounds. Our protocols run in polynomial time under the above well-known conjecture about primes.
  • Stateless Cryptographic Protocols Authors: Vipul Goyal and Hemanta K. Maji
    Secure computation protocols inherently involve multiple rounds of interaction among the parties where, typically, a party has to keep a state about what has happened in the protocol so far and then \emph{wait} for the other party to respond. We study if this is inherent. In particular, we study the possibility of designing cryptographic protocols where the parties can be completely stateless and compute the outgoing message by applying a single fixed function to the incoming message (independent of any state). The problem of designing stateless secure computation protocols can be reduced to the problem of designing protocols satisfying the notion of resettable computation introduced by Canetti, Goldreich, Goldwasser and Micali (FOCS'01) and widely studied thereafter. The current state of the art in resettable computation allows for the construction of protocols which provide security only when a \emph{single predetermined} party is resettable. An exception is the case of the zero-knowledge functionality, for which a protocol in which both parties are resettable was recently obtained by Deng, Goyal and Sahai (FOCS'09). The fundamental question left open in this sequence of works is whether fully-resettable computation is possible when: \begin{enumerate} \item An adversary can corrupt any number of parties, and \item The adversary can reset any party to its original state during the execution of the protocol and can restart the protocol. \end{enumerate} In this paper, we resolve the above problem by constructing secure protocols realizing \emph{any} efficiently computable multi-party functionality in the plain model under standard cryptographic assumptions. First, we construct a Fully-Resettable Simulation Sound Zero-Knowledge (ss-rs-rZK) protocol. Next, based on these ss-rs-rZK protocols, we show how to compile any semi-honest secure protocol into a protocol secure against fully resetting adversaries. Next, we study a seemingly unrelated open question: ``Does there exist a functionality which, in the concurrent setting, is impossible to securely realize using black-box (BB) simulation but can be realized using non-black-box (NBB) simulation?'' We resolve the above question in the affirmative by giving an example of such a (reactive) functionality. Somewhat surprisingly, this is done by making a connection to the existence of a fully resettable simulation sound zero-knowledge protocol.
  • How to Store a Secret on Continually Leaky Devices Authors: Yevgeniy Dodis and Allison Lewko and Brent Waters and Daniel Wichs
    We consider the question of how to store a value secretly on devices that continually leak information about their internal state to an external attacker. If the secret value is stored on a single device, and the attacker can leak even a single predicate of the internal state of that device, then she may learn some information about the secret value itself. Therefore, we consider a setting where the secret value is shared between multiple devices (or multiple components of one device), each of which continually leaks arbitrary adaptively chosen predicates of its individual state. Since leakage is continual, each device must also continually update its state so that an attacker cannot just leak it entirely one bit at a time. In our model, the devices update their state individually and asynchronously, without any communication between them. The update process is necessarily randomized, and its randomness can leak as well. As our main result, we construct a sharing scheme for two devices, where a constant fraction of the internal state of each device can leak in between and during updates. Our scheme has the structure of a public-key encryption, where one share is a secret key and the other is a ciphertext. As a contribution of independent interest, we also get public-key encryption in the continual leakage model, introduced by Brakerski et al. and Dodis et al. (FOCS '10). This scheme tolerates continual leakage on the secret key and the updates, and simplifies the recent construction of Lewko, Lewko and Waters (STOC '11). For our main result, we also show how to update the ciphertexts of the encryption scheme so that the message remains hidden even if an attacker interleaves leakage on secret key and ciphertext shares. The security of our scheme is based on the linear assumption in prime-order bilinear groups. We also provide an extension to general access structures realizable by linear secret sharing schemes across many devices. The main advantage of this extension is that the state of some devices can be compromised entirely, while that of all the remaining devices is susceptible to continual leakage. Lastly, we show the impossibility of information-theoretic sharing schemes in our model, where continually leaky devices update their state individually.
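
For the Chung-Pass abstract above, the back-of-the-envelope comparison referenced there runs as follows, under the idealized assumption that each independent repetition of a protocol with soundness error 1/2 halves the error (here $r$, the verifier randomness of a single execution, is our own placeholder symbol):

    \[
      \Bigl(\tfrac{1}{2}\Bigr)^{k} \le \delta
      \quad\Longrightarrow\quad
      k \ge \log_2 \tfrac{1}{\delta},
    \]

so naive parallel repetition spends roughly $r \cdot \log_2(1/\delta)$ fresh random bits, whereas the derandomized repetition theorems discussed in the abstract need only $m \cdot O(\log(1/\delta))$ extra bits, independent of $r$.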
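
For the Li-Wooley-Zuckerman abstract above, the formal requirement on a non-malleable extractor (the standard Dodis-Wichs formalization, stated here for reference in our own notation) is that for every source $X$ with sufficient min-entropy, for $Y$ uniform and independent of $X$, and for every adversarial function $\mathcal{A}$ with $\mathcal{A}(y) \neq y$ for all $y$,

    \[
      \bigl(\mathrm{nmExt}(X,Y),\ \mathrm{nmExt}(X,\mathcal{A}(Y)),\ Y\bigr)
      \;\approx_{\varepsilon}\;
      \bigl(U_m,\ \mathrm{nmExt}(X,\mathcal{A}(Y)),\ Y\bigr),
    \]

where $\approx_{\varepsilon}$ denotes statistical distance at most $\varepsilon$ and $U_m$ is a fresh uniform $m$-bit string. Dropping the $\mathrm{nmExt}(X,\mathcal{A}(Y))$ components recovers the usual definition of a strong extractor.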

10B

  • Efficient Distributed Medium Access Authors: Devavrat Shah and Jinwoo Shin and Prasad Tetali
    Consider a wireless network of n nodes represented by a graph G=(V, E), where an edge (i,j) models the fact that transmissions of i and j interfere with each other, i.e., simultaneous transmissions of i and j become unsuccessful. Hence it is required that at each time instant a set of non-interfering nodes (corresponding to an independent set in G) access the wireless medium. To utilize wireless resources efficiently, it is necessary to arbitrate access to the medium among interfering nodes properly. Moreover, to be of practical use, such a mechanism is required to be totally distributed as well as simple. As the main result of this paper, we provide such a medium access algorithm. It is randomized, totally distributed and simple: each node attempts to access the medium at each time with a probability that is a function of its local information. We establish the efficiency of the algorithm by showing that the corresponding network Markov chain is positive recurrent as long as the demand imposed on the network can be supported by the wireless network (using any algorithm). In that sense, the proposed algorithm is optimal in terms of utilizing wireless resources. The algorithm is oblivious to the network graph structure, in contrast with the so-called `polynomial back-off' algorithm by Hastad-Leighton-Rogoff (STOC '87, SICOMP '96) that is established to be optimal for the complete graph and bipartite graphs (by Goldberg-MacKenzie (SODA '96, JCSS '99)).
  • Local Distributed Decision Authors: Pierre Fraigniaud and Amos Korman and David Peleg
    A central theme in distributed network algorithms concerns understanding and coping with the issue of locality. Despite considerable progress, research efforts in this direction have not yet resulted in a solid basis in the form of a fundamental computational complexity theory for locality. Inspired by sequential complexity theory, we focus on a complexity theory for distributed decision problems. In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. We consider the standard $\mathcal{LOCAL}$ model of computation and define $LD(t)$ (for local decision) as the class of decision problems that can be solved in $t$ communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class $BPLD(t,p,q)$, containing all languages for which there exists a randomized algorithm that runs in $t$ rounds, accepts correct instances with probability at least $p$ and rejects incorrect ones with probability at least $q$. We show that $p^2+q = 1$ is a threshold for the containment of $LD(t)$ in $BPLD(t,p,q)$. More precisely, we show that there exists a language that does not belong to $LD(t)$ for any $t=o(n)$ but does belong to $BPLD(0,p,q)$ for any $p,q\in (0,1]$ such that $p^2+q\leq 1$. On the other hand, we show that, restricted to hereditary languages, $BPLD(t,p,q)=LD(O(t))$, for any function $t$ and any $p,q\in (0,1]$ such that $p^2+q> 1$. In addition, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables all languages to be decided in constant time. Finally, we introduce the notion of local reduction, and establish some completeness results.
  • The Complexity of Renaming Authors: Dan Alistarh and James Aspnes and Seth Gilbert and Rachid Guerraoui
    We study the complexity of renaming, a fundamental problem in distributed computing in which a set of processes need to pick distinct names from a given namespace. We prove a local lower bound of \Omega(k) process steps for deterministic renaming into any namespace of size sub-exponential in k, where k is the number of participants. This bound is tight: it draws an exponential separation between deterministic and randomized solutions, and implies tight bounds for deterministic fetch-and-increment registers, queues and stacks. The proof of the bound is interesting in its own right, for it relies on the first reduction from renaming to another fundamental problem in distributed computing: mutual exclusion. We complement our local bound with a global lower bound of \Omega(k log(k/c)) on the total step complexity of renaming into a namespace of size ck, for any c \geq 1. This applies to randomized algorithms against a strong adversary, and helps derive new global lower bounds for randomized approximate counter and fetch-and-increment implementations, all tight within logarithmic factors.
  • Mutual Exclusion with O(log^2 log n) Amortized Work Authors: Michael A. Bender and Seth Gilbert
    This paper gives a new algorithm for mutual exclusion in which each passage through the critical section costs amortized O(log^2 log n) RMRs with high probability. The algorithm operates in a standard asynchronous, local spinning, shared-memory model. The algorithm works against an oblivious adversary and guarantees that every process enters the critical section with high probability. The algorithm achieves its efficient performance by exploiting a connection between mutual exclusion and approximate counting. A central aspect of the work presented here is the development and application of efficient approximate-counting data structures. Our mutual-exclusion algorithm represents a departure from previous algorithms in terms of techniques, adversary model, and performance. Most previous mutual exclusion algorithms are based on tournament-tree constructions. The most efficient prior algorithms require O(log n/ log log n) RMRs and work against an adaptive adversary. In this paper, we focus on an oblivious model, and the algorithm in this paper is the first (for any adversary model) that can beat the O(log n/ log log n) RMR bound. (A toy approximate counter illustrating the approximate-counting connection is sketched after this session's listing.)
  • Algorithms for the Generalized Sorting Problem Authors: Zhiyi Huang and Sampath Kannan and Sanjeev Khanna
    We study the generalized sorting problem where we are given a set of $n$ elements to be sorted but only a subset of all possible pairwise element comparisons is allowed. The goal is to determine the sorted order using the smallest possible number of allowed comparisons. The generalized sorting problem may be equivalently viewed as follows. Given an undirected graph $G(V,E)$ where $V$ is the set of elements to be sorted and $E$ defines the set of allowed comparisons, adaptively find the smallest subset $E' \subseteq E$ of edges to probe such that the directed graph induced by $E'$ contains a Hamiltonian path. When $G$ is a complete graph, we get the standard sorting problem, and it is well-known that $\Theta(n \log n)$ comparisons are necessary and sufficient. An extensively studied special case of the generalized sorting problem is the nuts and bolts problem where the allowed comparison graph is a complete bipartite graph between two equal-size sets. It is known that for this special case also, there is a deterministic algorithm that sorts using $\Theta(n \log n)$ comparisons. However, when the allowed comparison graph is arbitrary, to our knowledge, no bound better than the trivial $O(n^2)$ bound is known. Our main result is a randomized algorithm that sorts any allowed comparison graph using $\wt{O}(n^{3/2})$ comparisons with high probability (provided the input is sortable). We also study the sorting problem in randomly generated allowed comparison graphs, and show that when the edge probability is $p$, $\wt{O}(\min\{\frac{n}{p^2},n^{3/2} \sqrt{p}\})$ comparisons suffice on average to sort.
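
As a baseline for the Huang-Kannan-Khanna abstract directly above, the trivial O(n^2)-probe algorithm it improves on can be made concrete as follows. This is a minimal sketch under the assumption that the instance is sortable; the interface names are ours.

    def trivial_generalized_sort(elements, allowed_pairs, probe):
        """Probe every allowed pair, then read off the sorted order.

        `allowed_pairs` is the edge set of the comparison graph G and
        `probe(a, b)` returns True iff a precedes b in the hidden total order.
        If the instance is sortable, the probed DAG contains a Hamiltonian
        path, so its topological order is unique and equals the sorted order.
        """
        succ = {x: set() for x in elements}
        indeg = {x: 0 for x in elements}
        for a, b in allowed_pairs:                  # O(|E|) <= O(n^2) probes
            lo, hi = (a, b) if probe(a, b) else (b, a)
            succ[lo].add(hi)
            indeg[hi] += 1
        order, remaining = [], set(elements)
        while remaining:
            sources = [x for x in remaining if indeg[x] == 0]
            assert len(sources) == 1, "not sortable with the allowed comparisons"
            x = sources[0]
            order.append(x)
            remaining.remove(x)
            for y in succ[x]:
                indeg[y] -= 1
        return order

    # Example: a path a-b-c-d plus two chords already determines the order.
    rank = {'a': 0, 'b': 1, 'c': 2, 'd': 3}
    pairs = [('a', 'b'), ('b', 'c'), ('c', 'd'), ('a', 'c'), ('b', 'd')]
    print(trivial_generalized_sort(list(rank), pairs, lambda u, v: rank[u] < rank[v]))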
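
The Bender-Gilbert abstract above leans on a connection between mutual exclusion and approximate counting. For intuition only, here is the classic Morris approximate counter; it is not the data structure from the paper, but it shows why approximate counters can be so much cheaper than exact ones.

    import random

    class MorrisCounter:
        """Approximate counter that stores only an exponent, i.e. O(log log n) bits."""

        def __init__(self):
            self.exponent = 0

        def increment(self):
            # Bump the exponent with probability 2^(-exponent).
            if random.random() < 2.0 ** (-self.exponent):
                self.exponent += 1

        def estimate(self):
            # 2^exponent - 1 is an unbiased estimate of the number of increments.
            return 2 ** self.exponent - 1

    c = MorrisCounter()
    for _ in range(100_000):
        c.increment()
    print(c.estimate())   # close to 100,000 in expectation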