TechTalks from event: FOCS 2011

We will be uploading the videos for FOCS 2011 during the week of Nov 28th 2011. If you find any discrepancy, please let us know by clicking on the report error link on the talk page. If you did not permit the video to be published and we have published your talk by mistake, please notify us immediately at support AT


  • Efficient Distributed Medium Access Authors: Devavrat Shah and Jinwoo Shin and Prasad Tetali
    Consider a wireless network of n nodes represented by a graph G=(V, E), where an edge (i,j) models the fact that transmissions of i and j interfere with each other, i.e., simultaneous transmissions of i and j are unsuccessful. Hence it is required that at each time instant the nodes accessing the wireless medium form a non-interfering set (corresponding to an independent set in G). To utilize wireless resources efficiently, access to the medium must be properly arbitrated among interfering nodes. Moreover, to be of practical use, such a mechanism must be totally distributed as well as simple. As the main result of this paper, we provide such a medium access algorithm. It is randomized, totally distributed and simple: each node attempts to access the medium at each time with a probability that is a function of its local information. We establish the efficiency of the algorithm by showing that the corresponding network Markov chain is positive recurrent as long as the demand imposed on the network can be supported by the wireless network (using any algorithm). In that sense, the proposed algorithm is optimal in terms of utilizing wireless resources. The algorithm is oblivious to the network graph structure, in contrast with the so-called `polynomial back-off' algorithm of Hastad-Leighton-Rogoff (STOC '87, SICOMP '96), which was shown to be optimal for complete graphs and bipartite graphs by Goldberg-MacKenzie (SODA '96, JCSS '99).
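    A minimal sketch of one synchronous round of such a random-access scheme, assuming queue lengths as the local information and a placeholder access-probability function (the paper's actual choice, which drives the positive-recurrence proof, is more delicate):

      import random

      def medium_access_round(conflicts, queues, access_prob):
          # conflicts: node -> set of interfering neighbors (edges of G).
          # queues: node -> current backlog, the node's local information.
          # Each node decides to attempt independently, using local info only.
          attempts = {v for v in conflicts
                      if random.random() < access_prob(queues[v])}
          # A transmission succeeds iff no interfering neighbor also attempts,
          # i.e. the successful nodes form an independent set in G.
          return {v for v in attempts if not (conflicts[v] & attempts)}

      # Hypothetical access-probability function, for illustration only.
      prob = lambda q: q / (q + 1.0)
      conflicts = {1: {2}, 2: {1, 3}, 3: {2}}
      print(medium_access_round(conflicts, {1: 5, 2: 0, 3: 7}, prob))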
  • Local Distributed Decision Authors: Pierre Fraigniaud and Amos Korman and David Peleg
    A central theme in distributed network algorithms concerns understanding and coping with the issue of locality. Despite considerable progress, research efforts in this direction have not yet resulted in a solid basis in the form of a fundamental computational complexity theory for locality. Inspired by sequential complexity theory, we focus on a complexity theory for distributed decision problems. In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. We consider the standard $\mathcal{LOCAL}$ model of computation and define $LD(t)$ (for local decision) as the class of decision problems that can be solved in $t$ communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class $BPLD(t,p,q)$, containing all languages for which there exists a randomized algorithm that runs in $t$ rounds, accepts correct instances with probability at least $p$, and rejects incorrect ones with probability at least $q$. We show that $p^2+q = 1$ is a threshold for the containment of $LD(t)$ in $BPLD(t,p,q)$. More precisely, we show that there exists a language that does not belong to $LD(t)$ for any $t=o(n)$ but does belong to $BPLD(0,p,q)$ for any $p,q\in (0,1]$ such that $p^2+q\leq 1$. On the other hand, we show that, restricted to hereditary languages, $BPLD(t,p,q)=LD(O(t))$, for any function $t$ and any $p,q\in (0,1]$ such that $p^2+q> 1$. In addition, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that makes it possible to decide all languages in constant time. Finally, we introduce the notion of local reduction, and establish some completeness results.
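    As an illustration of the decision mechanism only (not of the paper's constructions), a sketch of generic $t$-round local decision in the $\mathcal{LOCAL}$ model, using the usual convention that the instance is accepted iff every node accepts:

      def local_decide(graph, inputs, t, verdict):
          # Each node gathers its radius-t ball (t communication rounds)
          # and applies a local verdict to the view it collected.
          def ball(v):
              seen = frontier = {v}
              for _ in range(t):
                  frontier = {u for w in frontier for u in graph[w]} - seen
                  seen = seen | frontier
              return seen
          # Global acceptance requires every node to accept locally.
          return all(verdict(v, {u: inputs[u] for u in ball(v)})
                     for v in graph)

      # Example: proper 2-coloring is locally decidable with t = 1.
      graph = {1: {2}, 2: {1, 3}, 3: {2}}
      colors = {1: "red", 2: "blue", 3: "red"}
      print(local_decide(graph, colors, 1,
                         lambda v, view: all(view[u] != view[v]
                                             for u in view if u != v)))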
  • The Complexity of Renaming Authors: Dan Alistarh and James Aspnes and Seth Gilbert and Rachid Guerraoui
    We study the complexity of renaming, a fundamental problem in distributed computing in which a set of processes need to pick distinct names from a given namespace. We prove a local lower bound of $\Omega(k)$ process steps for deterministic renaming into any namespace of size sub-exponential in $k$, where $k$ is the number of participants. This bound is tight: it draws an exponential separation between deterministic and randomized solutions, and implies tight bounds for deterministic fetch-and-increment registers, queues and stacks. The proof of the bound is interesting in its own right, for it relies on the first reduction from renaming to another fundamental problem in distributed computing: mutual exclusion. We complement our local bound with a global lower bound of $\Omega(k \log(k/c))$ on the total step complexity of renaming into a namespace of size $ck$, for any $c \geq 1$. This applies to randomized algorithms against a strong adversary, and helps derive new global lower bounds for randomized approximate counter and fetch-and-increment implementations, all tight within logarithmic factors.
  • Mutual Exclusion with O(log^2 log n) Amortized Work Authors: Michael A. Bender and Seth Gilbert
    This paper gives a new algorithm for mutual exclusion in which each passage through the critical section costs amortized O(log^2 log n) RMRs with high probability. The algorithm operates in a standard asynchronous, local spinning, shared-memory model. The algorithm works against an oblivious adversary and guarantees that every process enters the critical section with high probability. The algorithm achieves its efficient performance by exploiting a connection between mutual exclusion and approximate counting. A central aspect of the work presented here is the development and application of efficient approximate-counting data structures. Our mutual-exclusion algorithm represents a departure from previous algorithms in terms of techniques, adversary model, and performance. Most previous mutual exclusion algorithms are based on tournament-tree constructions. The most efficient prior algorithms require O(log n/ log log n) RMRs and work against an adaptive adversary. In this paper, we focus on an oblivious model, and the algorithm in this paper is the first (for any adversary model) that can beat the O(log n/ log log n) RMR bound.
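    For intuition on the approximate-counting side, here is the classical Morris approximate counter, a single-process sketch only; the paper's concurrent data structures, and their use inside a mutual-exclusion algorithm, are substantially more involved:

      import random

      class MorrisCounter:
          # Stores only ~log log n bits: x approximates log2 of the count.
          def __init__(self):
              self.x = 0

          def increment(self):
              # Increment x with probability 2^-x, so that E[2^x - 1]
              # equals the true number of increments.
              if random.random() < 2.0 ** (-self.x):
                  self.x += 1

          def estimate(self):
              return 2 ** self.x - 1

      c = MorrisCounter()
      for _ in range(1000):
          c.increment()
      print(c.estimate())  # unbiased estimate of 1000, constant-factor noise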
  • Algorithms for the Generalized Sorting Problem Authors: Zhiyi Huang and Sampath Kannan and Sanjeev Khanna
    We study the generalized sorting problem where we are given a set of $n$ elements to be sorted but only a subset of all possible pairwise element comparisons is allowed. The goal is to determine the sorted order using the smallest possible number of allowed comparisons. The generalized sorting problem may be equivalently viewed as follows. Given an undirected graph $G(V,E)$ where $V$ is the set of elements to be sorted and $E$ defines the set of allowed comparisons, adaptively find the smallest subset $E' \subseteq E$ of edges to probe such that the directed graph induced by $E'$ contains a Hamiltonian path. When $G$ is a complete graph, we get the standard sorting problem, and it is well-known that $\Theta(n \log n)$ comparisons are necessary and sufficient. An extensively studied special case of the generalized sorting problem is the nuts and bolts problem where the allowed comparison graph is a complete bipartite graph between two equal-size sets. It is known that for this special case also, there is a deterministic algorithm that sorts using $\Theta(n \log n)$ comparisons. However, when the allowed comparison graph is arbitrary, to our knowledge, no bound better than the trivial $O(n^2)$ bound is known. Our main result is a randomized algorithm that sorts any allowed comparison graph using $\widetilde{O}(n^{3/2})$ comparisons with high probability (provided the input is sortable). We also study the sorting problem in randomly generated allowed comparison graphs, and show that when the edge probability is $p$, $\widetilde{O}(\min\{\frac{n}{p^2},n^{3/2} \sqrt{p}\})$ comparisons suffice on average to sort.
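    To make the problem concrete, here is a sketch of the trivial $O(n^2)$ baseline the abstract mentions: probe every allowed pair, then certify a Hamiltonian path via a unique topological order (the paper's algorithm probes only $\widetilde{O}(n^{3/2})$ edges):

      from itertools import combinations

      def trivial_generalized_sort(elements, allowed, probe):
          # Probe every allowed pair (the trivial O(n^2) strategy),
          # orienting each edge from smaller to larger element.
          succ = {v: set() for v in elements}
          indeg = {v: 0 for v in elements}
          for u, v in combinations(elements, 2):
              if frozenset((u, v)) in allowed:
                  lo, hi = (u, v) if probe(u, v) else (v, u)
                  succ[lo].add(hi)
                  indeg[hi] += 1
          # A DAG has a Hamiltonian path iff its topological order is
          # unique: exactly one source must remain at each step.
          order, remaining = [], set(elements)
          while remaining:
              sources = [v for v in remaining if indeg[v] == 0]
              if len(sources) != 1:
                  return None  # order not determined by allowed comparisons
              v = sources[0]
              order.append(v)
              remaining.remove(v)
              for w in succ[v]:
                  indeg[w] -= 1
          return order

      allowed = {frozenset(p) for p in [(1, 2), (2, 3), (1, 3)]}
      print(trivial_generalized_sort([3, 1, 2], allowed, lambda u, v: u < v))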


  • Information Equals Amortized Communication Authors: Mark Braverman and Anup Rao
    We show how to efficiently simulate the sending of a message $M$ to a receiver who has partial information about the message, so that the expected number of bits communicated in the simulation is close to the amount of additional information that the message reveals to the receiver. This is a generalization and strengthening of the Slepian-Wolf theorem, which shows how to carry out such a simulation with low \emph{amortized} communication in the case that $M$ is a deterministic function of $X$. A caveat is that our simulation is interactive. As a consequence, we obtain new relationships between the randomized amortized communication complexity of a function, and its information complexity. We prove that for any given distribution on inputs, the internal information cost (namely the information revealed to the parties) involved in computing any relation or function using a two party interactive protocol is {\em exactly} equal to the amortized communication complexity of computing independent copies of the same relation or function. Here by amortized communication complexity we mean the average per copy communication in the best protocol for computing multiple copies, with a bound on the error in each copy (i.e.\ we require only that the output in each coordinate is correct with good probability, and do not require that all outputs are simultaneously correct). This significantly simplifies the relationships between the various measures of complexity for average case communication protocols, and proves that if a function's information cost is smaller than its communication complexity, then multiple copies of the function can be computed more efficiently in parallel than sequentially. Finally, we show that the only way to prove a strong direct sum theorem for randomized communication complexity is by solving a particular variant of the pointer jumping problem that we define. If this problem has a cheap communication protocol, then a strong direct sum theorem must hold. On the other hand, if it does not, then the problem itself gives a counterexample for the direct sum question. In the process we show that a strong direct sum theorem for communication complexity holds if and only if efficient compression of communication protocols is possible.
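    In symbols (notation ours, error parameters suppressed on both sides): writing $\Pi$ for the transcript of a protocol computing $f$ on inputs $(X,Y)\sim\mu$, the internal information cost is $\mathrm{IC}_\mu(\Pi) = I(X;\Pi \mid Y) + I(Y;\Pi \mid X)$, the information each party learns about the other's input, and the paper's central equality can be stated as

      $$\inf_{\Pi \text{ computes } f} \mathrm{IC}_\mu(\Pi) \;=\; \lim_{n\to\infty} \frac{\mathrm{CC}_\mu(f^n)}{n},$$

    where $\mathrm{CC}_\mu(f^n)$ is the communication needed to compute $n$ independent copies of $f$ under the per-copy error guarantee described above.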
  • Delays and the Capacity of Continuous-time Channels Authors: Sanjeev Khanna and Madhu Sudan
    Any physical channel of communication offers two potential reasons why its capacity (the number of bits it can transmit in a unit of time) might be unbounded: (1) (uncountably) infinitely many choices of signal strength at any given time, and (2) (uncountably) infinitely many instances of time at which signals may be sent. However, channel noise cancels out the potential unboundedness of the first aspect, leaving typical channels with only a finite capacity per instant of time. The latter source of infinity seems less extensively studied. A potential source of unreliability that might bound the capacity from the second aspect as well is ``delay'': signals transmitted by the sender at a given point of time may not be received with a predictable delay at the receiving end. In this work we examine this source of uncertainty by considering a simple discrete model of delay errors. In our model the communicating parties get to subdivide time as finely as they wish, but still have to cope with communication delays that are variable. The continuous process becomes the limit of our process as the time subdivision becomes infinitesimal. We analyze the limits of such channels and reach somewhat surprising conclusions: the capacity of a physical channel is finitely bounded only if at least one of the two sources of error (signal noise or delay noise) is adversarial. If both error sources are stochastic, or the adversarial source is noise that is independent of the stochastic delay, the capacity of the associated physical channel is infinite!
  • Efficient and Explicit Coding for Interactive Communication Authors: Ran Gelles and Ankur Moitra and Amit Sahai
    In this work, we study the fundamental problem of reliable interactive communication over a noisy channel. In a breakthrough sequence of papers published in 1992 and 1993, Schulman gave non-constructive proofs of the existence of general methods to emulate any two-party interactive protocol such that: (1) the emulation protocol only takes a constant-factor longer than the original protocol, and (2) if the emulation protocol is executed over any discrete memoryless noisy channel with constant capacity, then the probability that the emulation protocol fails to perfectly emulate the original protocol is exponentially small in the total length of the protocol. Unfortunately, Schulman's emulation procedures either only work in a nonstandard model with a large amount of shared randomness, or are non-constructive in that they rely on the existence of "absolute" tree codes. The only known proofs of the existence of absolute tree codes are non-constructive, and finding an explicit construction remains an important open problem. Indeed, randomly generated tree codes are not absolute tree codes with overwhelming probability. In this work, we revisit the problem of reliable interactive communication, and obtain the first fully explicit (randomized) efficient constant-rate emulation procedure for reliable interactive communication. Our protocol works for any discrete memoryless noisy channel with constant capacity, and our protocol's probability of failure is exponentially small in the total length of the protocol. We accomplish this goal by obtaining the following results: We introduce a new notion of goodness for a tree code, and define the notion of a potent tree code. We believe that this notion is of independent interest. We prove the correctness of an explicit emulation procedure based on any potent tree code. (This replaces the need for absolute tree codes in the work of Schulman.) We show that a randomly generated tree code (with suitable constant alphabet size) is an efficiently decodable potent tree code with overwhelming probability. Furthermore we are able to partially derandomize this result by means of epsilon-biased distributions using only $O(n)$ random bits, where $n$ is the depth of the tree. These (derandomized) results allow us to obtain our main result. Our results also extend to the case of interactive multi-party communication among a constant number of parties.
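    A sketch of the "randomly generated tree code" object itself, assuming a small constant output alphabet; the notions of potency and efficient decoding are beyond this illustration:

      import random

      class RandomTreeCode:
          # Each edge of the message tree gets an independent random output
          # symbol; a prefix encodes to the labels along its path, so
          # encoding is online: extending the message never rewrites symbols.
          def __init__(self, out_alphabet="abcdefgh", seed=0):
              self.out = out_alphabet
              self.rng = random.Random(seed)
              self.labels = {}  # message prefix (tuple) -> output symbol

          def encode(self, msg):
              word = []
              for i in range(1, len(msg) + 1):
                  prefix = tuple(msg[:i])
                  if prefix not in self.labels:
                      self.labels[prefix] = self.rng.choice(self.out)
                  word.append(self.labels[prefix])
              return "".join(word)

      tc = RandomTreeCode()
      print(tc.encode([0, 1, 1]))     # symbols determined by the prefix
      print(tc.encode([0, 1, 1, 0]))  # same three symbols, one more appended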
  • Efficient Reconstruction of Random Multilinear Formulas Authors: Ankit Gupta and Neeraj Kayal and Satya Lokam
    In the reconstruction problem for a multivariate polynomial $f$, we have blackbox access to $f$ and the goal is to efficiently reconstruct a representation of $f$ in a suitable model of computation. We give a polynomial time randomized algorithm for reconstructing random multilinear formulas. Our algorithm succeeds with high probability when given blackbox access to the polynomial computed by a random multilinear formula according to a natural distribution. This is the strongest model of computation for which a reconstruction algorithm is presently known, albeit efficient in a distributional sense rather than in the worst case. Previous results on this problem considered much weaker models such as depth-3 circuits with various restrictions or read-once formulas. Our proof uses ranks of partial derivative matrices as a key ingredient and combines them with an analysis of the algebraic structure of random multilinear formulas. Partial derivative matrices have earlier been used to prove lower bounds in a number of models of arithmetic complexity, including multilinear formulas and constant depth circuits. As such, our results give supporting evidence to the general thesis that mathematical properties that capture efficient computation in a model should also enable learning algorithms for functions efficiently computable in that model.
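    A small sketch of the key ingredient, the partial-derivative (coefficient) matrix of a multilinear polynomial under a variable partition, whose rank is the measure referred to; the example polynomial and partition here are ours, for illustration:

      import itertools
      import sympy as sp

      def pd_matrix_rank(f, ys, zs):
          # Rows are multilinear monomials in ys, columns in zs; the entry
          # is the coefficient of their product in f.  For multilinear f,
          # the rank of this matrix is the standard lower-bound measure.
          poly = sp.Poly(sp.expand(f), *(list(ys) + list(zs)))
          def monomials(vs):
              return [sp.prod(c) if c else sp.Integer(1)
                      for r in range(len(vs) + 1)
                      for c in itertools.combinations(vs, r)]
          M = sp.Matrix([[poly.coeff_monomial(r * c) for c in monomials(zs)]
                         for r in monomials(ys)])
          return M.rank()

      y1, y2, z1, z2 = sp.symbols("y1 y2 z1 z2")
      f = (y1 + z1) * (y2 + z2)  # a tiny multilinear formula
      print(pd_matrix_rank(f, [y1, y2], [z1, z2]))  # 4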
  • New extension of the Weil bound for character sums with applications to coding Authors: Tali Kaufman and Shachar Lovett
    The Weil bound for character sums is a deep result in algebraic geometry with many applications both in mathematics and in theoretical computer science. The Weil bound states that for any polynomial $f(x)$ over a finite field $\mathbb{F}$ and any additive character $\chi:\mathbb{F} \to \mathbb{C}$, either $\chi(f(x))$ is a constant function or it is distributed close to uniform. The Weil bound is quite effective as long as $\deg(f) \ll \sqrt{|\mathbb{F}|}$, but it breaks down when the degree of $f$ exceeds $\sqrt{|\mathbb{F}|}$. As the Weil bound plays a central role in many areas, finding extensions for polynomials of larger degree is an important problem with many possible applications. In this work we develop such an extension over finite fields $\mathbb{F}_{p^n}$ of small characteristic: we prove that if $f(x)=g(x)+h(x)$ where $\deg(g) \ll \sqrt{|\mathbb{F}|}$ and $h(x)$ is a sparse polynomial of arbitrary degree but bounded weight degree, then the same conclusion of the classical Weil bound still holds: either $\chi(f(x))$ is constant or its distribution is close to uniform. In particular, this shows that the subcode of Reed-Muller codes of degree $\omega(1)$ generated by traces of sparse polynomials is a code with near optimal distance, while the Reed-Muller code of such a degree has no distance (i.e., $o(1)$ distance); this is one of the few examples where one can prove that sparse polynomials behave differently from non-sparse polynomials of the same degree. As an application we prove new general results for affine-invariant codes. We prove that any affine-invariant subspace of quasi-polynomial size is (1) indeed a code (i.e., has good distance) and (2) locally testable. Previous results for general affine-invariant codes were known only for codes of polynomial size and of length $2^n$ where $n$ needed to be prime. Thus, our techniques are the first to extend to general families of such codes of super-polynomial size, where we also remove the requirement that $n$ be prime. The proof is based on two main ingredients: the extension of the Weil bound for character sums, and a new Fourier-analytic approach for estimating the weight distribution of general codes with large dual distance, which may be of independent interest.
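    For reference, one standard form of the classical bound (a known fact, not taken from the paper): for $f \in \mathbb{F}_q[x]$ of degree $d \ge 1$ with $\gcd(d,q)=1$ and any nontrivial additive character $\chi$,

      $$\Bigl|\sum_{x \in \mathbb{F}_q} \chi(f(x))\Bigr| \;\le\; (d-1)\sqrt{q}.$$

    The bound is non-trivial only while $d - 1 < \sqrt{q}$, which is exactly the breakdown at $\deg(f) \approx \sqrt{|\mathbb{F}|}$ described above.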


  • Maximizing Expected Utility for Stochastic Combinatorial Optimization Problems Authors: Jian Li and Amol Deshpande
    We study the stochastic versions of a broad class of combinatorial problems where the weights of the elements in the input dataset are uncertain. The class of problems that we study includes shortest paths, minimum weight spanning trees, and minimum weight matchings over probabilistic graphs, as well as other combinatorial problems like knapsack. We observe that the expected value is inadequate in capturing different types of {\em risk-averse} or {\em risk-prone} behaviors, and instead we consider a more general objective: to maximize the {\em expected utility} of the solution for some given utility function, rather than the expected weight (expected weight becomes a special case). We show that we can obtain a polynomial time approximation algorithm with {\em additive error} $\epsilon$ for any $\epsilon>0$, if there is a pseudopolynomial time algorithm for the {\em exact} version of the problem. (This is true for the problems mentioned above.) Our result generalizes several prior results on stochastic shortest path, stochastic spanning tree, and stochastic knapsack. Our algorithm for utility maximization makes use of the separability of exponential utility and a technique to decompose a general utility function into exponential utility functions, which may be useful in other stochastic optimization problems.
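    The separability that drives the algorithm can be written out (notation ours): if the element weights $w_e$ are independent and $W(S)=\sum_{e\in S} w_e$ is the weight of a solution $S$, then for an exponential utility $u(x)=e^{-\lambda x}$,

      $$\mathbb{E}\bigl[e^{-\lambda W(S)}\bigr] \;=\; \prod_{e \in S} \mathbb{E}\bigl[e^{-\lambda w_e}\bigr],$$

    so the objective factorizes over the elements; a general utility function is then handled by approximately decomposing it as a short combination of such exponential utilities, which is the decomposition technique the abstract refers to.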
  • Approximation Algorithms for Submodular Multiway Partition Authors: Chandra Chekuri and Alina Ene
    We study algorithms for the {\sc Submodular Multiway Partition} problem (SubMP). An instance of SubMP consists of a finite ground set $V$, a subset of $k$ elements $S = \{s_1,s_2,\ldots,s_k\}$ called terminals, and a non-negative submodular set function $f:2^V\rightarrow \mathbb{R}_+$ on $V$ provided as a value oracle. The goal is to partition $V$ into $k$ sets $A_1,\ldots,A_k$ such that for $1 \le i \le k$, $s_i \in A_i$ and $\sum_{i=1}^k f(A_i)$ is minimized. SubMP generalizes some well-known problems such as the {\sc Multiway Cut} problem in graphs and hypergraphs, and the {\sc Node-weighted Multiway Cut} problem in graphs. SubMP for arbitrary submodular functions (instead of just symmetric functions) was considered by Zhao, Nagamochi and Ibaraki \cite{ZhaoNI05}. Previous algorithms were based on greedy splitting and divide-and-conquer strategies. In very recent work \cite{ChekuriE11} we proposed a convex-programming relaxation for SubMP based on the Lov\'asz extension of a submodular function and showed its applicability for some special cases. In this paper we obtain the following results for arbitrary submodular functions via this relaxation: (1) a $2$-approximation for SubMP, which improves the $(k-1)$-approximation from \cite{ZhaoNI05}; (2) a $(1.5-1/k)$-approximation for SubMP when $f$ is {\em symmetric}, which improves the $2(1-1/k)$-approximation from \cite{Queyranne99,ZhaoNI05}.
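    For reference, the Lov\'asz extension underlying the relaxation: for $f:2^V\rightarrow \mathbb{R}_+$ and $x \in [0,1]^V$,

      $$\hat{f}(x) \;=\; \mathbb{E}_{\theta \sim U[0,1]}\bigl[f(\{v \in V : x_v \ge \theta\})\bigr],$$

    which is convex precisely when $f$ is submodular. Roughly, the relaxation of \cite{ChekuriE11} gives each element a fractional assignment across the $k$ parts and minimizes the sum of Lov\'asz extensions with each terminal $s_i$ pinned to part $i$; the approximation ratios come from rounding its optimal solutions.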
  • An FPTAS for #Knapsack and Related Counting Problems Authors: Parikshit Gopalan and Adam Klivans and Raghu Meka and Daniel Stefankovic and Santosh Vempala and Eric Vigoda
    Given n elements with non-negative integer weights w_1,...,w_n and an integer capacity C, we consider the counting version of the classic knapsack problem: find the number of distinct subsets whose weights add up to at most C. We give the first deterministic, fully polynomial-time approximation scheme (FPTAS) for estimating the number of solutions to any knapsack constraint (our estimate has relative error $1\pm\epsilon$). Our algorithm is based on dynamic programming. Previously, fully polynomial randomized approximation schemes (FPRAS) were known, first by Morris and Sinclair via Markov chain Monte Carlo techniques, and subsequently by Dyer via dynamic programming and rejection sampling. In addition, we present a new method for deterministic approximate counting using read-once branching programs. Our approach yields an FPTAS for several other counting problems, including counting solutions for the multidimensional knapsack problem with a constant number of constraints, the general integer knapsack problem, and the contingency tables problem with a constant number of rows.
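    As a baseline for what is being approximated, a sketch of the exact pseudopolynomial counting DP; it runs in $O(nC)$ time, hence exponential in the bit-length of $C$, which is why a genuine FPTAS is needed:

      def count_knapsack_exact(weights, C):
          # dp[c] = number of subsets of the items seen so far whose
          # weights add up to exactly c.
          dp = [0] * (C + 1)
          dp[0] = 1  # the empty subset
          for w in weights:
              for c in range(C, w - 1, -1):  # descending: each item used once
                  dp[c] += dp[c - w]
          return sum(dp)  # subsets of total weight at most C

      print(count_knapsack_exact([1, 2, 3], 3))  # 5: {}, {1}, {2}, {3}, {1,2}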
  • Approximation Algorithms for Correlated Knapsacks and Non-Martingale Bandits Authors: Anupam Gupta and Ravishankar Krishnaswamy and Marco Molinaro and R. Ravi
    In the stochastic knapsack problem, we are given a knapsack of size B, and a set of items whose sizes and rewards are drawn from a known probability distribution. However, the only way to know the actual size and reward of an item is to schedule it: when it completes, we get to know these values. The goal is to schedule these items (possibly making adaptive decisions based on the sizes seen thus far) to maximize the expected total reward of items which successfully pack into the knapsack. We know constant-factor approximations when (i) the rewards and sizes are independent of each other, and (ii) we cannot prematurely cancel items after we schedule them. What can we say when either or both of these assumptions are relaxed? Related to this are other stochastic packing problems like the multi-armed bandit (and budgeted learning) problems; here one is given several arms which evolve in a specified stochastic fashion with each pull, and the goal is to (adaptively) decide which arms to pull, in order to maximize the expected reward obtained after B pulls in total. Much recent work on this problem focuses on the case when the evolution of each arm follows a martingale, i.e., when the expected reward from one pull of an arm is the same as the reward at the current state. What can we say when the rewards do not form a martingale? In this paper, we give constant-factor approximation algorithms for the stochastic knapsack problem with correlations and/or cancellations. Extending ideas we develop for this problem, we also give constant-factor approximations for MAB problems without the martingale assumption. Indeed, we can show that previously proposed linear programming relaxations for these problems have large integrality gaps. So we propose new time-indexed LP relaxations; using a decomposition and “gap-filling” approach, we convert these fractional solutions to distributions over strategies, and then use the LP values and the time ordering information from these strategies to devise randomized adaptive scheduling algorithms. We hope our LP formulation and decomposition methods may provide a new way to address other stochastic optimization problems with more general contexts.
  • Evolution with Recombination Authors: Varun Kanade
    Valiant (2007) introduced a computational model of evolution and suggested that Darwinian evolution be studied in the framework of computational learning theory. Valiant describes evolution as a restricted form of learning where exploration is limited to a set of possible mutations and feedback is received through the survival of the fittest mutant. In subsequent work, Feldman (2008) showed that evolvability in Valiant's model is equivalent to learning in the correlational statistical query (CSQ) model. We extend Valiant's model to include genetic recombination and show that in certain cases, recombination can significantly speed up the process of evolution in terms of the number of generations, though at the expense of population size. This follows by a reduction from parallel CSQ algorithms to evolution with recombination. This gives an exponential speed-up (in terms of the number of generations) over previously known results for evolving conjunctions and halfspaces with respect to restricted distributions.