TechTalks from event: FOCS 2011

We will be uploading the videos for FOCS 2011 during the week of Nov 28th, 2011. If you find any discrepancy, please let us know by clicking the "report error" link on the talk page. If you did not give permission for your talk to be published and we have published it by mistake, please notify us immediately at support AT weyond.com.

10A

  • A Two Prover One Round Game with Strong Soundness Authors: Subhash Khot and Muli Safra
    We show that for any fixed prime $q \geq 5$ and constant $\zeta > 0$, it is NP-hard to distinguish whether a two prover one round game with $q^6$ answers has value at least $1-\zeta$ or at most $\frac{4}{q}$. The result is obtained by combining two techniques: (i) An Inner PCP based on the {\it point versus subspace} test for linear functions. The test is analyzed Fourier analytically. (ii) The Outer/Inner PCP composition that relies on a certain {\it sub-code covering} property for Hadamard codes. This is a new and essentially black-box method to translate a {\it codeword test} for Hadamard codes into a {\it consistency test}, leading to a full PCP construction. As an application, we show that unless NP has quasi-polynomial time deterministic algorithms, the Quadratic Programming Problem is inapproximable within factor $(\log n)^{1/6 - o(1)}$.
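    As a reminder of the object being analyzed (the standard definition, not anything specific to this paper): the value of a two prover one round game $G$ is the maximum acceptance probability over prover strategies,
      \[ \mathrm{val}(G) = \max_{f,g}\ \Pr_{(u,v) \sim \mu}\big[ V\big(u, v, f(u), g(v)\big) = 1 \big], \]
    where the verifier samples a question pair $(u,v) \sim \mu$, sends $u$ and $v$ to the two non-communicating provers, receives answers $f(u)$ and $g(v)$, and accepts according to a predicate $V$. The theorem above says that it is NP-hard to distinguish $\mathrm{val}(G) \geq 1-\zeta$ from $\mathrm{val}(G) \leq 4/q$ even though each prover has only $q^6$ possible answers.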
  • The Randomness Complexity of Parallel Repetition Authors: Kai-Min Chung and Rafael Pass
    Consider an $m$-round interactive protocol with soundness error $1/2$. How much extra randomness is required to decrease the soundness error to $\delta$ through parallel repetition? Previous work shows that for \emph{public-coin} interactive protocols with \emph{unconditional soundness}, $m \cdot O(\log (1/\delta))$ bits of extra randomness suffice. In this work, we initiate a more general study of the above question. (i) We establish the first derandomized parallel repetition theorem for public-coin interactive protocols with \emph{computational soundness} (a.k.a. arguments). The parameters of our result essentially match the earlier works in the information-theoretic setting. (ii) We show that obtaining even a sub-linear dependency on the number of rounds $m$ (i.e., $o(m) \cdot \log(1/\delta)$) in either the information-theoretic or computational settings requires proving $\mathrm{P} \neq \mathrm{PSPACE}$. (iii) We show that non-trivial derandomized parallel repetition for private-coin protocols is impossible in the information-theoretic setting, and requires proving $\mathrm{P} \neq \mathrm{PSPACE}$ in the computational setting.
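    To see where the $m \cdot O(\log(1/\delta))$ budget comes from (a standard back-of-the-envelope calculation, not taken from the paper): $k$ independent parallel repetitions drive the soundness error from $1/2$ down to $2^{-k}$, so
      \[ (1/2)^k \leq \delta \iff k \geq \log_2(1/\delta) \]
    repetitions suffice. Naive repetition spends fresh verifier randomness in every copy, i.e., $k \cdot \rho$ extra bits if a single execution uses $\rho$ random bits, whereas the derandomized theorems discussed above need only $m \cdot O(\log(1/\delta))$ extra bits in total, independent of $\rho$.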
  • Privacy Amplification and Non-Malleable Extractors Via Character Sums Authors: Xin Li and Trevor D. Wooley and David Zuckerman
    In studying how to communicate over a public channel with an active adversary, Dodis and Wichs introduced the notion of a non-malleable extractor. A non-malleable extractor dramatically strengthens the notion of a strong extractor. A strong extractor takes two inputs, a weakly-random $x$ and a uniformly random seed $y$, and outputs a string which appears uniform, even given $y$. For a non-malleable extractor $nmExt$, the output $nmExt(x,y)$ should appear uniform given $y$ as well as $nmExt(x,A(y))$, where $A$ is an arbitrary function with $A(y) \neq y$. We show that an extractor introduced by Chor and Goldreich is non-malleable when the entropy rate is above half. It outputs a linear number of bits when the entropy rate is $1/2 + \alpha$, for any $\alpha>0$. Previously, no nontrivial parameters were known for any non-malleable extractor. To achieve a polynomial running time when outputting many bits, we rely on a widely-believed conjecture about the distribution of prime numbers in arithmetic progressions. Our analysis involves a character sum estimate, which may be of independent interest. Using our non-malleable extractor, we obtain protocols for ``privacy amplification'': key agreement between two parties who share a weakly-random secret. Our protocols work in the presence of an active adversary with unlimited computational power, and have optimal entropy loss. When the secret has entropy rate greater than $1/2$, the protocol follows from a result of Dodis and Wichs, and takes two rounds. When the secret has entropy rate $\delta$ for any constant~$\delta>0$, our new protocol takes $O(1)$ rounds. Our protocols run in polynomial time under the above well-known conjecture about primes.
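    In symbols (one standard way to write the definition sketched above; the notation is ours): $nmExt : \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m$ is a non-malleable extractor for min-entropy $k$ if for every source $X$ with min-entropy at least $k$ and every adversarial function $A$ with $A(y) \neq y$ for all $y$,
      \[ \big( nmExt(X,Y),\ nmExt(X,A(Y)),\ Y \big) \approx_{\epsilon} \big( U_m,\ nmExt(X,A(Y)),\ Y \big), \]
    where $Y$ is a uniformly random seed and $U_m$ is uniform on $\{0,1\}^m$ and independent of everything else. Dropping the middle component recovers the usual definition of a strong extractor, which is why non-malleability is a strengthening.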
  • Stateless Cryptographic Protocols Authors: Vipul Goyal and Hemanta K. Maji
    Secure computation protocols inherently involve multiple rounds of interaction among the parties where, typically, a party has to keep a state about what has happened in the protocol so far and then \emph{wait} for the other party to respond. We study whether this is inherent. In particular, we study the possibility of designing cryptographic protocols where the parties can be completely stateless and compute the outgoing message by applying a single fixed function to the incoming message (independent of any state). The problem of designing stateless secure computation protocols can be reduced to the problem of designing protocols satisfying the notion of resettable computation introduced by Canetti, Goldreich, Goldwasser and Micali (FOCS'01) and widely studied thereafter. The current state of the art in resettable computation allows for construction of protocols which provide security only when a \emph{single predetermined} party is resettable. An exception is the case of the zero-knowledge functionality, for which a protocol in which both parties are resettable was recently obtained by Deng, Goyal and Sahai (FOCS'09). The fundamental question left open in this sequence of works is whether fully-resettable computation is possible when: (1) an adversary can corrupt any number of parties, and (2) the adversary can reset any party to its original state during the execution of the protocol and can restart the protocol. In this paper, we resolve the above problem by constructing secure protocols realizing \emph{any} efficiently computable multi-party functionality in the plain model under standard cryptographic assumptions. First, we construct a Fully-Resettable Simulation Sound Zero-Knowledge (ss-rs-rZK) protocol. Next, based on these ss-rs-rZK protocols, we show how to compile any semi-honest secure protocol into a protocol secure against fully resetting adversaries. We then study a seemingly unrelated open question: ``Does there exist a functionality which, in the concurrent setting, is impossible to securely realize using black-box (BB) simulation but can be realized using non-black-box (NBB) simulation?'' We resolve this question in the affirmative by giving an example of such a (reactive) functionality. Somewhat surprisingly, this is done by making a connection to the existence of a fully resettable simulation sound zero-knowledge protocol.
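    The stateless requirement in the first paragraph has a concrete programming interpretation. The toy sketch below (illustrative only; it contains none of the paper's cryptographic machinery) contrasts the usual stateful interface with the stateless one studied here, where the next message must be a single fixed function of the incoming message:

      # Illustrative only: the two interfaces, with no cryptography.
      class StatefulParty:
          def __init__(self):
              self.transcript = []                # state kept between rounds

          def respond(self, msg_in):
              self.transcript.append(msg_in)      # reply may depend on history
              return f"reply-{len(self.transcript)}"

      def stateless_respond(msg_in):
          # A single fixed function: the reply may depend only on msg_in.
          # Any "state" must travel inside the messages themselves.
          return f"reply-to:{msg_in}"

    Statelessness is what makes resetting trivial for the adversary: it can replay any earlier message, and the party, having no memory, must answer consistently, which is exactly the fully-resettable setting the paper addresses.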
  • How to Store a Secret on Continually Leaky Devices Authors: Yevgeniy Dodis and Allison Lewko and Brent Waters and Daniel Wichs
    We consider the question of how to store a value secretly on devices that continually leak information about their internal state to an external attacker. If the secret value is stored on a single device, and the attacker can leak even a single predicate of the internal state of that device, then she may learn some information about the secret value itself. Therefore, we consider a setting where the secret value is shared between multiple devices (or multiple components of one device), each of which continually leaks arbitrary adaptively chosen predicates of its individual state. Since leakage is continual, each device must also continually update its state so that an attacker cannot just leak it entirely one bit at a time. In our model, the devices update their state individually and asynchronously, without any communication between them. The update process is necessarily randomized, and its randomness can leak as well. As our main result, we construct a sharing scheme for two devices, where a constant fraction of the internal state of each device can leak in between and during updates. Our scheme has the structure of a public-key encryption, where one share is a secret key and the other is a ciphertext. As a contribution of independent interest, we also get public-key encryption in the continual leakage model, introduced by Brakerski et al. and Dodis et al. (FOCS '10). This scheme tolerates continual leakage on the secret key and the updates, and simplifies the recent construction of Lewko, Lewko and Waters (STOC '11). For our main result, we also show how to update the ciphertexts of the encryption scheme so that the message remains hidden even if an attacker interleaves leakage on secret key and ciphertext shares. The security of our scheme is based on the linear assumption in prime-order bilinear groups. We also provide an extension to general access structures realizable by linear secret sharing schemes across many devices. The main advantage of this extension is that the state of some devices can be compromised entirely, while the state of all remaining devices is susceptible to continual leakage. Lastly, we show impossibility of information-theoretic sharing schemes in our model, where continually leaky devices update their state individually.
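    Schematically (our notation, based only on the description above), the two-device scheme stores a secret $s$ as $S_1 = sk$ on one device and $S_2 = \mathrm{Enc}_{pk}(s)$ on the other, and each device refreshes its share locally with fresh randomness that may itself leak:
      \[ S_1' \leftarrow \mathrm{UpdateKey}(S_1; r_1), \qquad S_2' \leftarrow \mathrm{UpdateCt}(S_2; r_2), \]
    with the updates run independently and asynchronously on the two devices, and $s$ still recoverable from the pair of shares after any interleaving of updates.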

10B

  • Efficient Distributed Medium Access Authors: Devavrat Shah and Jinwoo Shin and Prasad Tetali
    Consider a wireless network of $n$ nodes represented by a graph $G=(V, E)$, where an edge $(i,j)$ models the fact that transmissions of $i$ and $j$ interfere with each other, i.e., simultaneous transmissions of $i$ and $j$ are unsuccessful. Hence it is required that at each time instance a set of non-interfering nodes (corresponding to an independent set in $G$) access the wireless medium. To utilize wireless resources efficiently, the access to the medium among interfering nodes must be arbitrated properly. Moreover, to be of practical use, such a mechanism must be totally distributed as well as simple. As the main result of this paper, we provide such a medium access algorithm. It is randomized, totally distributed and simple: each node attempts to access the medium at each time with a probability that is a function of its local information. We establish efficiency of the algorithm by showing that the corresponding network Markov chain is positive recurrent as long as the demand imposed on the network can be supported by the wireless network (using any algorithm). In that sense, the proposed algorithm is optimal in terms of utilizing wireless resources. The algorithm is oblivious to the network graph structure, in contrast with the so-called `polynomial back-off' algorithm of Hastad-Leighton-Rogoff (STOC '87, SICOMP '96), which is established to be optimal for the complete graph and bipartite graphs (by Goldberg-MacKenzie (SODA '96, JCSS '99)).
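    The following toy simulation (our own illustration; the specific access-probability function below is an arbitrary choice, not the one constructed and analyzed in the paper) shows the shape of such an algorithm: in each slot, every node independently attempts transmission with a probability computed from purely local information, here its own queue length, and an attempt succeeds only if no interfering neighbor attempts in the same slot.

      import random

      # Toy random medium access on an interference graph G.
      # neighbors: adjacency list of G; queue[i]: packets waiting at node i.
      def step(neighbors, queue, arrival_rate=0.05):
          attempts = {i for i in neighbors
                      if queue[i] > 0 and random.random() < queue[i] / (1 + queue[i])}
          for i in attempts:
              if not attempts & set(neighbors[i]):   # no interfering neighbor attempted
                  queue[i] -= 1                      # successful transmission
          for i in neighbors:                        # fresh packet arrivals
              if random.random() < arrival_rate:
                  queue[i] += 1

      # Example: a 4-cycle; at most two non-adjacent nodes can transmit at once.
      neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
      queue = {i: 0 for i in neighbors}
      for _ in range(1000):
          step(neighbors, queue)
      print(queue)  # positive recurrence means queues stay bounded under supportable load

    The paper's contribution is precisely a choice of access probabilities for which the resulting network Markov chain is positive recurrent whenever the offered load is supportable at all.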
  • Local Distributed Decision Authors: Pierre Fraigniaud and Amos Korman and David Peleg
    A central theme in distributed network algorithms concerns understanding and coping with the issue of locality. Despite considerable progress, research efforts in this direction have not yet resulted in a solid basis in the form of a fundamental computational complexity theory for locality. Inspired by sequential complexity theory, we focus on a complexity theory for distributed decision problems. In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. We consider the standard $\mathcal{LOCAL}$ model of computation and define $LD(t)$ (for local decision) as the class of decision problems that can be solved in $t$ communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class $BPLD(t,p,q)$, containing all languages for which there exists a randomized algorithm that runs in $t$ rounds, accepts correct instances with probability at least $p$, and rejects incorrect ones with probability at least $q$. We show that $p^2+q = 1$ is a threshold for the containment of $LD(t)$ in $BPLD(t,p,q)$. More precisely, we show that there exists a language that does not belong to $LD(t)$ for any $t=o(n)$ but does belong to $BPLD(0,p,q)$ for any $p,q\in (0,1]$ such that $p^2+q\leq 1$. On the other hand, we show that, restricted to hereditary languages, $BPLD(t,p,q)=LD(O(t))$, for any function $t$ and any $p,q\in (0,1]$ such that $p^2+q> 1$. In addition, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables all languages to be decided in constant time. Finally, we introduce the notion of local reduction, and establish some completeness results.
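    The randomization threshold above can be collected into one display (restating the abstract's claims; for the equality, the languages are restricted to hereditary ones): for any function $t$ and any $p,q \in (0,1]$,
      \[ p^2 + q > 1 \implies BPLD(t,p,q) = LD(O(t)), \]
    while for any $p,q \in (0,1]$ with $p^2 + q \leq 1$ there is a language in $BPLD(0,p,q)$ that lies outside $LD(t)$ for every $t = o(n)$. Thus $p^2 + q = 1$ is exactly the boundary at which randomization stops adding power to local decision.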
  • The Complexity of Renaming Authors: Dan Alistarh and James Aspnes and Seth Gilbert and Rachid Guerraoui
    We study the complexity of renaming, a fundamental problem in distributed computing in which a set of processes need to pick distinct names from a given namespace. We prove a local lower bound of $\Omega(k)$ process steps for deterministic renaming into any namespace of size sub-exponential in $k$, where $k$ is the number of participants. This bound is tight: it draws an exponential separation between deterministic and randomized solutions, and implies tight bounds for deterministic fetch-and-increment registers, queues and stacks. The proof of the bound is interesting in its own right, for it relies on the first reduction from renaming to another fundamental problem in distributed computing: mutual exclusion. We complement our local bound with a global lower bound of $\Omega(k \log(k/c))$ on the total step complexity of renaming into a namespace of size $ck$, for any $c \geq 1$. This applies to randomized algorithms against a strong adversary, and helps derive new global lower bounds for randomized approximate counter and fetch-and-increment implementations, all tight within logarithmic factors.
  • Mutual Exclusion with $O(\log^2 \log n)$ Amortized Work Authors: Michael A. Bender and Seth Gilbert
    This paper gives a new algorithm for mutual exclusion in which each passage through the critical section costs amortized $O(\log^2 \log n)$ RMRs with high probability. The algorithm operates in a standard asynchronous, local spinning, shared-memory model. The algorithm works against an oblivious adversary and guarantees that every process enters the critical section with high probability. The algorithm achieves its efficient performance by exploiting a connection between mutual exclusion and approximate counting. A central aspect of the work presented here is the development and application of efficient approximate-counting data structures. Our mutual-exclusion algorithm represents a departure from previous algorithms in terms of techniques, adversary model, and performance. Most previous mutual exclusion algorithms are based on tournament-tree constructions. The most efficient prior algorithms require $O(\log n / \log \log n)$ RMRs and work against an adaptive adversary. In this paper, we focus on an oblivious model, and the algorithm in this paper is the first (for any adversary model) that can beat the $O(\log n / \log \log n)$ RMR bound.
  • Algorithms for the Generalized Sorting Problem Authors: Zhiyi Huang and Sampath Kannan and Sanjeev Khanna
    We study the generalized sorting problem where we are given a set of $n$ elements to be sorted but only a subset of all possible pairwise element comparisons is allowed. The goal is to determine the sorted order using the smallest possible number of allowed comparisons. The generalized sorting problem may be equivalently viewed as follows. Given an undirected graph $G(V,E)$, where $V$ is the set of elements to be sorted and $E$ defines the set of allowed comparisons, adaptively find the smallest subset $E' \subseteq E$ of edges to probe such that the directed graph induced by $E'$ contains a Hamiltonian path. When $G$ is a complete graph, we get the standard sorting problem, and it is well-known that $\Theta(n \log n)$ comparisons are necessary and sufficient. An extensively studied special case of the generalized sorting problem is the nuts and bolts problem, where the allowed comparison graph is a complete bipartite graph between two equal-size sets. It is known that for this special case too, there is a deterministic algorithm that sorts using $\Theta(n \log n)$ comparisons. However, when the allowed comparison graph is arbitrary, to our knowledge, no bound better than the trivial $O(n^2)$ bound is known. Our main result is a randomized algorithm that sorts on any allowed comparison graph using $\widetilde{O}(n^{3/2})$ comparisons with high probability (provided the input is sortable). We also study the sorting problem in randomly generated allowed comparison graphs, and show that when the edge probability is $p$, $\widetilde{O}(\min\{\frac{n}{p^2},n^{3/2} \sqrt{p}\})$ comparisons suffice on average to sort.
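    For concreteness, here is the trivial $O(n^2)$ baseline mentioned above in executable form (our own illustration; the paper's $\widetilde{O}(n^{3/2})$ algorithm probes edges far more selectively): probe every allowed comparison, then topologically sort the resulting directed graph. The instance is sortable exactly when the topological order is unique, i.e., when the probed edges contain a Hamiltonian path.

      from graphlib import TopologicalSorter

      # values[i] is the hidden key of element i; the algorithm may only learn
      # the relative order of i and j by probing an allowed edge (i, j) of G.
      def probe(values, i, j):
          return (i, j) if values[i] < values[j] else (j, i)

      def trivial_generalized_sort(n, allowed_edges, values):
          pred = {i: set() for i in range(n)}      # pred[j]: elements probed to precede j
          for (i, j) in allowed_edges:             # O(n^2) probes in the worst case
              lo, hi = probe(values, i, j)
              pred[hi].add(lo)
          order = list(TopologicalSorter(pred).static_order())
          # Sortable iff consecutive elements are linked by probed edges,
          # i.e., the directed probe graph contains a Hamiltonian path.
          if all(a in pred[b] for a, b in zip(order, order[1:])):
              return order
          return None                              # order not determined by G

      # Example: allowed comparisons form a small graph over 4 elements.
      print(trivial_generalized_sort(4, [(0, 1), (1, 2), (0, 2), (2, 3)], [2, 1, 3, 4]))
      # -> [1, 0, 2, 3], the unique sorted order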

11A

  • Information Equals Amortized Communication Authors: Mark Braverman and Anup Rao
    We show how to efficiently simulate the sending of a message $M$ to a receiver who has partial information about the message, so that the expected number of bits communicated in the simulation is close to the amount of additional information that the message reveals to the receiver. This is a generalization and strengthening of the Slepian-Wolf theorem, which shows how to carry out such a simulation with low \emph{amortized} communication in the case that $M$ is a deterministic function of $X$. A caveat is that our simulation is interactive. As a consequence, we obtain new relationships between the randomized amortized communication complexity of a function, and its information complexity. We prove that for any given distribution on inputs, the internal information cost (namely the information revealed to the parties) involved in computing any relation or function using a two party interactive protocol is {\em exactly} equal to the amortized communication complexity of computing independent copies of the same relation or function. Here by amortized communication complexity we mean the average per copy communication in the best protocol for computing multiple copies, with a bound on the error in each copy (i.e.\ we require only that the output in each coordinate is correct with good probability, and do not require that all outputs are simultaneously correct). This significantly simplifies the relationships between the various measures of complexity for average case communication protocols, and proves that if a function's information cost is smaller than its communication complexity, then multiple copies of the function can be computed more efficiently in parallel than sequentially. Finally, we show that the only way to prove a strong direct sum theorem for randomized communication complexity is by solving a particular variant of the pointer jumping problem that we define. If this problem has a cheap communication protocol, then a strong direct sum theorem must hold. On the other hand, if it does not, then the problem itself gives a counterexample for the direct sum question. In the process we show that a strong direct sum theorem for communication complexity holds if and only if efficient compression of communication protocols is possible.
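    The paper's central identity can be written in one line (the notation is ours; the abstract states it in words): for any function or relation $f$ and input distribution $\mu$,
      \[ IC_{\mu}(f) = \lim_{n \to \infty} \frac{1}{n}\, CC_{\mu}(f^n), \]
    where $IC_{\mu}(f)$ is the internal information cost of the best two-party protocol for $f$ on inputs drawn from $\mu$, and $CC_{\mu}(f^n)$ is the communication needed to compute $n$ independent copies with a small constant error allowed separately in each coordinate. In particular, if $IC_{\mu}(f)$ is strictly below the single-copy communication complexity, then $n$ copies can be computed with strictly less than $n$ times the single-copy cost.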
  • Delays and the Capacity of Continuous-time Channels Authors: Sanjeev Khanna and Madhu Sudan
    Any physical channel of communication offers two potential reasons why its capacity (the number of bits it can transmit in a unit of time) might be unbounded: (1) (uncountably) infinitely many choices of signal strength at any given time, and (2) (uncountably) infinitely many instances of time at which signals may be sent. However, channel noise cancels out the potential unboundedness of the first aspect, leaving typical channels with only a finite capacity per instant of time. The latter source of infinity seems less extensively studied. A potential source of unreliability that might bound the capacity from the second aspect as well is ``delay'': signals transmitted by the sender at a given point of time may not be received with a predictable delay at the receiving end. In this work we examine this source of uncertainty by considering a simple discrete model of delay errors. In our model the communicating parties get to subdivide time as finely as they wish, but still have to cope with communication delays that are variable. The continuous process becomes the limit of our process as the time subdivision becomes infinitesimal. We analyze the limits of such channels and reach somewhat surprising conclusions: the capacity of a physical channel is finitely bounded only if at least one of the two sources of error (signal noise or delay noise) is adversarial. If both error sources are stochastic, or if the adversarial source is noise that is independent of the stochastic delay, the capacity of the associated physical channel is infinite!
  • Efficient and Explicit Coding for Interactive Communication Authors: Ran Gelles and Ankur Moitra and Amit Sahai
    In this work, we study the fundamental problem of reliable interactive communication over a noisy channel. In a breakthrough sequence of papers published in 1992 and 1993, Schulman gave non-constructive proofs of the existence of general methods to emulate any two-party interactive protocol such that: (1) the emulation protocol only takes a constant factor longer than the original protocol, and (2) if the emulation protocol is executed over any discrete memoryless noisy channel with constant capacity, then the probability that the emulation protocol fails to perfectly emulate the original protocol is exponentially small in the total length of the protocol. Unfortunately, Schulman's emulation procedures either only work in a nonstandard model with a large amount of shared randomness, or are non-constructive in that they rely on the existence of "absolute" tree codes. The only known proofs of the existence of absolute tree codes are non-constructive, and finding an explicit construction remains an important open problem. Indeed, randomly generated tree codes are not absolute tree codes with overwhelming probability. In this work, we revisit the problem of reliable interactive communication, and obtain the first fully explicit (randomized) efficient constant-rate emulation procedure for reliable interactive communication. Our protocol works for any discrete memoryless noisy channel with constant capacity, and our protocol's probability of failure is exponentially small in the total length of the protocol. We accomplish this goal by obtaining the following results: We introduce a new notion of goodness for a tree code, and define the notion of a potent tree code. We believe that this notion is of independent interest. We prove the correctness of an explicit emulation procedure based on any potent tree code. (This replaces the need for absolute tree codes in the work of Schulman.) We show that a randomly generated tree code (with suitable constant alphabet size) is an efficiently decodable potent tree code with overwhelming probability. Furthermore, we are able to partially derandomize this result by means of epsilon-biased distributions using only $O(n)$ random bits, where $n$ is the depth of the tree. These (derandomized) results allow us to obtain our main result. Our results also extend to the case of interactive multi-party communication among a constant number of parties.
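    For context, the standard tree code property from Schulman's work (which the potent tree codes introduced above relax): a tree code labels the edges of the infinite binary tree with symbols from a finite alphabet $\Sigma$ so that, for some constant $\alpha > 0$, any two root-to-depth-$n$ paths $x, y$ that diverge in the last $\ell$ levels satisfy
      \[ \Delta\big( C(x)_{[n-\ell+1,\,n]},\ C(y)_{[n-\ell+1,\,n]} \big) \geq \alpha \ell, \]
    where $C(x)$ is the sequence of edge labels along $x$ and $\Delta$ is Hamming distance. An "absolute" tree code demands this for every pair of diverging paths; roughly speaking, a potent tree code only requires the property to hold often enough for the emulation to succeed, which is what makes random constructions usable.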
  • Efficient Reconstruction of Random Multilinear Formulas Authors: Ankit Gupta and Neeraj Kayal and Satya Lokam
    In the reconstruction problem for a multivariate polynomial $f$, we have blackbox access to $f$ and the goal is to efficiently reconstruct a representation of $f$ in a suitable model of computation. We give a polynomial-time randomized algorithm for reconstructing random multilinear formulas. Our algorithm succeeds with high probability when given blackbox access to the polynomial computed by a random multilinear formula according to a natural distribution. This is the strongest model of computation for which a reconstruction algorithm is presently known, albeit efficient in a distributional sense rather than in the worst case. Previous results on this problem considered much weaker models, such as depth-3 circuits with various restrictions or read-once formulas. Our proof uses ranks of partial derivative matrices as a key ingredient, combining them with an analysis of the algebraic structure of random multilinear formulas. Partial derivative matrices have earlier been used to prove lower bounds in a number of models of arithmetic complexity, including multilinear formulas and constant-depth circuits. As such, our results give supporting evidence to the general thesis that mathematical properties that capture efficient computation in a model should also enable learning algorithms for functions efficiently computable in that model.
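    The key ingredient named above, the partial derivative matrix, has a simple standard definition (this formulation comes from the multilinear lower-bound literature and is not specific to this paper): fix a partition of the variables into sets $Y$ and $Z$; for a multilinear polynomial $f$ over $Y \cup Z$, let $M_f$ have rows indexed by multilinear monomials $m_Y$ in $Y$, columns by multilinear monomials $m_Z$ in $Z$, and entries
      \[ (M_f)_{m_Y, m_Z} = \text{the coefficient of } m_Y \cdot m_Z \text{ in } f. \]
    Upper bounds on $\mathrm{rank}(M_f)$ for polynomials computed by small multilinear formulas drive the known lower bounds, and, per the thesis stated above, the same structure is what the reconstruction algorithm exploits.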
  • New extension of the Weil bound for character sums with applications to coding Authors: Tali Kaufman and Shachar Lovett
    The Weil bound for character sums is a deep result in algebraic geometry with many applications both in mathematics and in theoretical computer science. The Weil bound states that for any polynomial $f(x)$ over a finite field $\mathbb{F}$ and any additive character $\chi:\mathbb{F} \to \mathbb{C}$, either $\chi(f(x))$ is a constant function or it is distributed close to uniform. The Weil bound is quite effective as long as $\deg(f) \ll \sqrt{|\mathbb{F}|}$, but it breaks down when the degree of $f$ exceeds $\sqrt{|\mathbb{F}|}$. As the Weil bound plays a central role in many areas, finding extensions for polynomials of larger degree is an important problem with many possible applications. In this work we develop such an extension over finite fields $\mathbb{F}_{p^n}$ of small characteristic: we prove that if $f(x)=g(x)+h(x)$, where $\deg(g) \ll \sqrt{|\mathbb{F}|}$ and $h(x)$ is a sparse polynomial of arbitrary degree but bounded weight degree, then the same conclusion of the classical Weil bound still holds: either $\chi(f(x))$ is constant or its distribution is close to uniform. In particular, this shows that the subcode of Reed-Muller codes of degree $\omega(1)$ generated by traces of sparse polynomials is a code with near-optimal distance, while Reed-Muller codes of such degree have no distance (i.e., $o(1)$ distance); this is one of the few examples where one can prove that sparse polynomials behave differently from non-sparse polynomials of the same degree. As an application we prove new general results for affine-invariant codes. We prove that any affine-invariant subspace of quasi-polynomial size is (1) indeed a code (i.e., has good distance) and (2) locally testable. Previous results for general affine-invariant codes were known only for codes of polynomial size, and of length $2^n$ where $n$ needed to be a prime. Thus, our techniques are the first to extend to general families of such codes of super-polynomial size, where we also remove the requirement that $n$ be prime. The proof is based on two main ingredients: the extension of the Weil bound for character sums, and a new Fourier-analytic approach for estimating the weight distribution of general codes with large dual distance, which may be of independent interest.
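    Quantitatively, the classical bound referred to above states (in its standard form): for any nontrivial additive character $\chi$ of $\mathbb{F}$ and any polynomial $f$ of degree $d$ with $\gcd(d, |\mathbb{F}|) = 1$,
      \[ \Big| \sum_{x \in \mathbb{F}} \chi\big(f(x)\big) \Big| \leq (d-1)\sqrt{|\mathbb{F}|}, \]
    which is nontrivial only while $d \ll \sqrt{|\mathbb{F}|}$, since the trivial bound is $|\mathbb{F}|$. This is exactly the breakdown point that the extension above overcomes for $f = g + h$ with $\deg(g) \ll \sqrt{|\mathbb{F}|}$ and $h$ sparse of bounded weight degree.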