TechTalks from event: FOCS 2011

We will be uploading the videos for FOCS 2011 during the week of Nov 28th, 2011. If you find any discrepancy, please let us know by clicking on the "report error" link on the talk page. If you did not permit your video to be published and we have published your talk by mistake, please notify us immediately at support AT weyond.com

2B

  • Sharp Mixing Time Bounds for Sampling Random Surfaces Authors: Pietro Caputo and Fabio Martinelli and Fabio Lucio Toninelli
    We analyze the mixing time of a natural local Markov chain (Gibbs sampler) for two commonly studied models of random surfaces: (i) discrete monotone surfaces in Z^3 with “almost planar” boundary conditions and (ii) the one-dimensional discrete Solid-on-Solid (SOS) model. In both cases we prove the first almost optimal bounds O(L^2 polylog(L)), where L is the size of the system. Our proof is inspired by the so-called “mean curvature” heuristic: on a large scale, the dynamics should approximate a deterministic motion in which each point of the surface moves according to a drift proportional to the local inverse mean curvature radius. Key technical ingredients are monotonicity, coupling and an argument due to D. Wilson [17] in the framework of lozenge tiling Markov chains. The novelty of our approach with respect to previous results consists in proving that, with high probability, the dynamics is dominated by a deterministic evolution which, apart from polylog(L) corrections, follows the mean curvature prescription. Our method works equally well for both models despite the fact that their equilibrium maximal deviations from the average height profile occur on very different scales (log L for monotone surfaces and √L for the SOS model). (A toy sketch of the SOS Gibbs sampler appears after this session's list.)
  • Improved Mixing Condition on the Grid for Counting and Sampling Independent Sets Authors: Ricardo Restrepo and Jinwoo Shin and Prasad Tetali and Eric Vigoda and Linji Yang
    The hard-core model has received much attention in the past couple of decades as a lattice gas model with hard constraints in statistical physics, a multicast model of calls in communication networks, and as a weighted independent set problem in combinatorics, probability and theoretical computer science. In this model, each independent set $I$ in a graph $G$ is weighted proportionally to $\lambda^{|I|}$, for a positive real parameter $\lambda$. For large $\lambda$, computing the partition function (namely, the normalizing constant which makes the weighting a probability distribution on a finite graph) on graphs of maximum degree $\Delta\ge 3$, is a well known computationally challenging problem. More concretely, let $\lambda_c(\mathbb{T}_\Delta)$ denote the critical value for the so-called uniqueness threshold of the hard-core model on the infinite $\Delta$-regular tree; recent breakthrough results of Dror Weitz (2006) and Allan Sly (2010) have identified $\lambda_c(\mathbb{T}_\Delta)$ as a threshold where the hardness of estimating the above partition function undergoes a computational transition. We focus on the well-studied particular case of the square lattice $\mathbb{Z}^2$, and provide a new lower bound for the uniqueness threshold, in particular taking it well above $\lambda_c(\mathbb{T}_4)$. Our technique refines and builds on the tree of self-avoiding walks approach of Weitz, resulting in a new technical sufficient criterion (of wider applicability) for establishing strong spatial mixing (and hence uniqueness) for the hard-core model. Applying our technique to $\mathbb{Z}^2$ we prove that strong spatial mixing holds for all $\lambda<2.3882$, improving upon the work of Weitz that held for $\lambda<27/16=1.6875$. Our results imply a fully-polynomial {\em deterministic} approximation algorithm for estimating the partition function, as well as rapid mixing of the associated Glauber dynamics to sample from the hard-core distribution. While we focus here on the notoriously difficult hard-core model, our approach can also be applied to any 2-spin model, such as the Ising model. (A toy sketch of the Glauber dynamics appears after this session's list.)
  • Solving connectivity problems parameterized by treewidth in single exponential time Authors: Marek Cygan and Jesper Nederlof and Marcin Pilipczuk and Michał Pilipczuk and Johan M. M. van Rooij and Jakub Onufry Wojtaszczyk
    For the vast majority of local problems on graphs of small treewidth (where by local we mean that a solution can be verified by checking separately the neighbourhood of each vertex), standard dynamic programming techniques give c^tw |V|^O(1) time algorithms, where tw is the treewidth of the input graph G = (V,E) and c is a constant. On the other hand, for problems with a global requirement (usually connectivity) the best-known algorithms were naive dynamic programming schemes running in at least tw^tw time. We breach this gap by introducing a technique we named Cut&Count that allows us to produce c^tw |V|^O(1) time Monte Carlo algorithms for most connectivity-type problems, including HAMILTONIAN PATH, STEINER TREE, FEEDBACK VERTEX SET and CONNECTED DOMINATING SET. These results have numerous consequences in various fields, like parameterized complexity, exact and approximate algorithms on planar and H-minor-free graphs and exact algorithms on graphs of bounded degree. In all these fields we are able to improve the best-known results for some problems. Also, looking from a more theoretical perspective, our results are surprising since the equivalence relation that partitions all partial solutions with respect to extendability to global solutions seems to consist of at least tw^tw equivalence classes for all these problems. Our results answer an open problem raised by Lokshtanov, Marx and Saurabh [SODA'11]. In contrast to the problems aiming to minimize the number of connected components that we solve using Cut&Count as mentioned above, we show that, assuming the Exponential Time Hypothesis, the aforementioned gap cannot be breached for some problems that aim to maximize the number of connected components, like CYCLE PACKING. The constant c in our algorithms is in all cases small (at most 4 for undirected problems and at most 6 for directed ones), and in several cases we are able to show that improving those constants would cause the Strong Exponential Time Hypothesis to fail. (A toy verification of the counting identity behind Cut&Count appears after this session's list.)
  • The minimum k-way cut of bounded size is fixed-parameter tractable Authors: Mikkel Thorup and Ken-ichi Kawarabayashi
    We consider the minimum k-way cut problem for unweighted undirected graphs with a size bound s on the number of cut edges allowed. Thus we seek to remove as few edges as possible so as to split a graph into k components, or report that this requires cutting more than s edges. We show that this problem is fixed-parameter tractable (FPT) with the standard parameterization in terms of the solution size s. More precisely, for s=O(1), we present a quadratic time algorithm. Moreover, we present a much easier linear time algorithm for planar graphs and bounded genus graphs. Our tractability result stands in contrast to known W[1] hardness of related problems. Without the size bound, Downey et al. [2003] proved that the minimum k-way cut problem is W[1] hard with parameter k, and this is even for simple unweighted graphs. Downey et al. asked about the status for planar graphs. We get linear time with fixed parameter k for simple planar graphs since the minimum k-way cut of a planar graph is of size at most 6k. More generally, we get FPT with parameter k for any graph class with bounded average degree. A simple reduction shows that vertex cuts are at least as hard as edge cuts, so the minimum k-way vertex cut is also W[1] hard with parameter k. Marx [2004] proved that finding a minimum k-way vertex cut of size s is also W[1] hard with parameter s. Marx asked about the FPT status with edge cuts, which we prove tractable here. We are not aware of any other cut problem where the vertex version is W[1] hard but the edge version is FPT, e.g., Marx [2004] proved that the k-terminal cut problem is FPT parameterized by cut size, both for edge and vertex cuts. (A naive baseline spelling out the problem definition appears after this session's list.)
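
The following is a minimal heat-bath sketch of the kind of local Gibbs sampler studied in "Sharp Mixing Time Bounds for Sampling Random Surfaces", specialized to the one-dimensional SOS model. The inverse temperature beta, the truncated height range, and the zero boundary heights are demo assumptions made here, not parameters taken from the paper.

```python
# A minimal heat-bath (Gibbs) sampler for the one-dimensional SOS model,
# illustrating the local dynamics whose mixing time the paper bounds.
import math
import random

def sos_gibbs_step(h, beta, height_range, boundary=0):
    """One heat-bath update: resample the height at a uniformly random
    site from its conditional distribution given its two neighbours."""
    L = len(h)
    i = random.randrange(L)
    left = h[i - 1] if i > 0 else boundary    # fixed boundary heights
    right = h[i + 1] if i < L - 1 else boundary
    # Conditional weights exp(-beta * (|x - left| + |x - right|)).
    weights = [math.exp(-beta * (abs(x - left) + abs(x - right)))
               for x in height_range]
    h[i] = random.choices(height_range, weights=weights)[0]

if __name__ == "__main__":
    L, beta = 32, 1.0
    heights = list(range(-10, 11))  # truncated height space for the demo
    h = [0] * L
    for _ in range(100 * L * L):    # order-L^2 updates, in the spirit of the bound
        sos_gibbs_step(h, beta, heights)
    print(h)
```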
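
Next, a minimal sketch of the heat-bath Glauber dynamics for the hard-core model from "Improved Mixing Condition on the Grid for Counting and Sampling Independent Sets", the chain whose rapid mixing on the grid follows from the strong spatial mixing result. The grid size and the value lambda = 2.0 (below the paper's 2.3882 threshold) are illustrative choices.

```python
# Heat-bath Glauber dynamics for the hard-core model on an n x n grid.
import random

def hardcore_glauber_step(occupied, n, lam):
    """Pick a uniformly random site; occupy it with probability
    lam/(1+lam) if all its neighbours are empty, otherwise vacate it."""
    x, y = random.randrange(n), random.randrange(n)
    neighbours = [(x + dx, y + dy)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= x + dx < n and 0 <= y + dy < n]
    if (random.random() < lam / (1.0 + lam)
            and not any(u in occupied for u in neighbours)):
        occupied.add((x, y))
    else:
        occupied.discard((x, y))

if __name__ == "__main__":
    n, lam = 20, 2.0                # lambda below the paper's threshold
    occupied = set()
    for _ in range(200 * n * n):
        hardcore_glauber_step(occupied, n, lam)
    print(len(occupied), "occupied sites sampled")
```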
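
The counting identity behind the "Cut" part of Cut&Count ("Solving connectivity problems parameterized by treewidth in single exponential time") can be checked by brute force: an edge set spanning cc connected components (isolated vertices included) has exactly 2^cc consistent cuts, i.e. two-colourings of the vertices with no bichromatic edge. Fixing one vertex's colour, a disconnected candidate solution therefore contributes an even count and vanishes modulo 2. The verifier below is a didactic toy, not the paper's algorithm.

```python
# Brute-force check of the 2^cc consistent-cuts identity on a toy graph.
from itertools import product

def count_consistent_cuts(vertices, edges):
    """Count 2-colourings of `vertices` under which no edge in `edges`
    has endpoints of different colours."""
    count = 0
    for colouring in product((0, 1), repeat=len(vertices)):
        colour = dict(zip(vertices, colouring))
        if all(colour[u] == colour[v] for u, v in edges):
            count += 1
    return count

if __name__ == "__main__":
    vertices = ["a", "b", "c", "d", "e"]
    edges = [("a", "b"), ("b", "c"), ("d", "e")]   # 2 components
    print(count_consistent_cuts(vertices, edges))  # prints 2^2 = 4
```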
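
To make the problem of "The minimum k-way cut of bounded size is fixed-parameter tractable" concrete, here is a naive baseline that simply spells out the definition by trying every set of at most s edges. It runs in roughly m^s time, far from the paper's quadratic algorithm for s = O(1), and is purely illustrative.

```python
# Brute-force baseline for the bounded-size minimum k-way cut problem.
from itertools import combinations

def num_components(n, edges):
    """Count connected components of a graph on vertices 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    comps = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return comps

def min_kway_cut_bounded(n, edges, k, s):
    """Return a smallest cut set of size <= s whose removal leaves at
    least k components, or None if more than s edges must be cut."""
    for size in range(s + 1):
        for cut in combinations(edges, size):
            rest = [e for e in edges if e not in cut]
            if num_components(n, rest) >= k:
                return cut
    return None

if __name__ == "__main__":
    path = [(0, 1), (1, 2), (2, 3), (3, 4)]         # a path on 5 vertices
    print(min_kway_cut_bounded(5, path, k=3, s=2))  # cuts 2 edges
```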

3A

  • Multiple-Source Multiple-Sink Maximum Flow in Directed Planar Graphs in Near-Linear Time Authors: Glencora Borradaile and Philip N. Klein and Shay Mozes and Yahav Nussbaum and Christian Wulff-Nilsen
    We give an $O(n \log^3 n)$ algorithm that, given an n-node directed planar graph with arc capacities, a set of source nodes, and a set of sink nodes, finds a maximum flow from the sources to the sinks. Previously, the fastest algorithms known for this problem were those for general graphs. (A sketch of the classical supersource/supersink reduction that this result avoids appears after this session's list.)
  • Minimum Weight Cycles and Triangles: Equivalences and Algorithms Authors: Liam Roditty and Virginia Vassilevska Williams
    We consider the fundamental algorithmic problem of finding a cycle of minimum weight in a weighted graph. In particular, we show that the minimum weight cycle problem in an undirected $n$-node graph with edge weights in $\{1,\ldots,M\}$ or in a directed $n$-node graph with edge weights in $\{-M,\ldots,M\}$ and no negative cycles can be efficiently reduced to finding a minimum weight {\em triangle} in a $\Theta(n)$-node \emph{undirected} graph with weights in $\{1,\ldots,O(M)\}$. Roughly speaking, our reductions imply the following surprising phenomenon: a minimum cycle with an arbitrary number of weighted edges can be ``encoded'' using only \emph{three} edges within roughly the same weight interval! This resolves a longstanding open problem posed in a seminal work by Itai and Rodeh [SIAM J. Computing 1978 and STOC'77] on minimum cycle in unweighted graphs. A direct consequence of our efficient reductions is $\tilde{O}(Mn^{\omega})\leq \tilde{O}(Mn^{2.376})$-time algorithms using fast matrix multiplication (FMM) for finding a minimum weight cycle in both undirected graphs with integral weights from the interval $[1,M]$ and directed graphs with integral weights from the interval $[-M,M]$. The latter seems to reveal a strong separation between the all pairs shortest paths (APSP) problem and the minimum weight cycle problem in directed graphs, as the fastest known APSP algorithm has a running time of $O(M^{0.681}n^{2.575})$ by Zwick [J. ACM 2002]. In contrast, when only combinatorial algorithms are allowed (that is, without FMM) the only known solution to minimum weight cycle is by computing APSP. Interestingly, any separation between the two problems in this case would be an amazing breakthrough, as by a recent paper by Vassilevska W. and Williams [FOCS'10], any $O(n^{3-\epsilon})$-time algorithm ($\epsilon>0$) for minimum weight cycle immediately implies an $O(n^{3-\delta})$-time algorithm ($\delta>0$) for APSP. (A brute-force statement of the target triangle problem appears after this session's list.)
  • Graph Connectivities, Network Coding, and Expander Graphs Authors: Ho Yee Cheung and Lap Chi Lau and Kai Man Leung
    In this paper we present a new algebraic formulation to compute edge connectivities in a directed graph, using the ideas developed in network coding. This reduces the problem of computing edge connectivities to solving systems of linear equations, thus allowing us to use tools in linear algebra to design new algorithms. Using the algebraic formulation we obtain faster algorithms for computing single source edge connectivities and all pairs edge connectivities; in some settings the amortized time to compute the edge connectivity for one pair is sublinear. Through this connection, we have also found an interesting use of expanders and superconcentrators to design fast algorithms for some graph connectivity problems.
  • Maximum Edge-Disjoint Paths in Planar Graphs with Congestion 2 Authors: Loïc Séguin-Charbonneau, F. Bruce Shepherd
    We study the maximum edge-disjoint path problem (\medp) in planar graphs $G=(V,E)$. We are given a set of terminal pairs $s_i t_i$, $i=1,2,\ldots,k$, and wish to find a maximum {\em routable} subset of demands, that is, a subset of demands that can be connected by edge-disjoint paths. It is well-known that there is an integrality gap of $\Omega(\sqrt{n})$ for this problem even on a grid-like graph, and hence in planar graphs (Garg et al.). In contrast, Chekuri et al. show that for planar graphs, if {\sc LP} is the optimal solution to the natural LP relaxation for \medp, then there is a subset which is routable in $2G$ that is of size $\Omega(\textsc{opt}/O(\log n))$. Subsequently they showed that $\Omega(\textsc{opt})$ is possible with congestion $4$ (i.e., in $4G$) instead of $2$. We strengthen this latter result to show that a constant approximation is possible also with congestion $2$ (and this is tight via the integrality gap grid example). We use a basic framework from work by Chekuri et al. At the heart of their approach is a 2-phase algorithm that selects an Okamura-Seymour instance. Each of their phases incurs a factor 2 congestion. It is possible to reduce one of the phases to have congestion 1. In order to achieve an overall congestion 2, however, the two phases must share capacity more carefully. For the Phase 1 problem, we extract a problem called {\em rooted clustering} that appears to be an interesting problem class in itself.
  • Online Node-weighted Steiner Tree and Related Problems Authors: Joseph (Seffi) Naor and Debmalya Panigrahi and Mohit Singh
    We obtain the first online algorithms for the node-weighted Steiner tree, Steiner forest and group Steiner tree problems that achieve a poly-logarithmic competitive ratio. Our algorithm for the Steiner tree problem runs in polynomial time, while those for the other two problems take quasi-polynomial time. Our algorithms can be viewed as online LP rounding algorithms in the framework of Buchbinder and Naor; however, while the {\em natural} LP formulations of these problems do lead to fractional algorithms with a poly-logarithmic competitive ratio, we are unable to round these LPs online without losing a polynomial factor. Therefore, we design new LP formulations for these problems drawing on a combination of paradigms such as {\em spider decompositions}, {\em low-depth Steiner trees}, {\em generalized group Steiner problems}, etc., and use the additional structure provided by these to round the more sophisticated LPs losing only a poly-logarithmic factor in the competitive ratio. As further applications of our techniques, we also design polynomial-time online algorithms with polylogarithmic competitive ratios for two fundamental network design problems in edge-weighted graphs: the group Steiner forest problem (thereby resolving an open question raised by Chekuri {\em et al.}) and the single source $\ell$-vertex connectivity problem (which complements similar results for the corresponding edge-connectivity problem due to Gupta {\em et al.}).
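
For context on "Multiple-Source Multiple-Sink Maximum Flow in Directed Planar Graphs in Near-Linear Time": the textbook reduction attaches a supersource and a supersink with infinite-capacity arcs, and those apex vertices generally destroy planarity, which is why general-graph max-flow algorithms were previously the best option. The sketch below shows that classical reduction with a compact Edmonds-Karp solver; it is background, not the paper's near-linear-time method, and the vertex names are arbitrary demo labels.

```python
# Classical multi-source/multi-sink reduction plus Edmonds-Karp max flow.
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a nested capacity dict cap[u][v]; returns the
    maximum flow value from s to t."""
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:       # BFS for a shortest path
            u = queue.popleft()
            for v, c in list(cap[u].items()):
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t                        # recover augmenting path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)  # bottleneck capacity
        for u, v in path:
            cap[u][v] -= aug                   # push flow, add residuals
            cap[v][u] += aug
        flow += aug

def multi_source_sink_flow(arcs, sources, sinks):
    cap = defaultdict(lambda: defaultdict(int))
    for u, v, c in arcs:
        cap[u][v] += c
    inf = sum(c for _, _, c in arcs) + 1
    for s in sources:          # apex arcs: this step can break planarity
        cap["SUPER_S"][s] = inf
    for t in sinks:
        cap[t]["SUPER_T"] = inf
    return max_flow(cap, "SUPER_S", "SUPER_T")

if __name__ == "__main__":
    arcs = [("s1", "v", 2), ("s2", "v", 2), ("v", "t", 3)]
    print(multi_source_sink_flow(arcs, ["s1", "s2"], ["t"]))  # prints 3
```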
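
The target of the reductions in "Minimum Weight Cycles and Triangles" is the minimum-weight triangle problem. The cubic brute-force solver below merely pins down that target; the paper's point is that it then admits matrix-multiplication-time algorithms for small integer weights.

```python
# Cubic brute-force solver for minimum-weight triangle.
def min_weight_triangle(n, w):
    """w maps frozenset({i, j}) to the weight of edge {i, j}; returns
    (total weight, (i, j, k)) of a minimum-weight triangle, or None."""
    best = None
    for i in range(n):
        for j in range(i + 1, n):
            if frozenset((i, j)) not in w:
                continue
            for k in range(j + 1, n):
                if frozenset((i, k)) in w and frozenset((j, k)) in w:
                    total = (w[frozenset((i, j))] + w[frozenset((i, k))]
                             + w[frozenset((j, k))])
                    if best is None or total < best[0]:
                        best = (total, (i, j, k))
    return best

if __name__ == "__main__":
    w = {frozenset(e): wt for e, wt in
         [((0, 1), 2), ((1, 2), 2), ((0, 2), 5), ((1, 3), 1), ((2, 3), 1)]}
    print(min_weight_triangle(4, w))  # (4, (1, 2, 3))
```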

3B

  • Extractors for circuit sources Authors: Emanuele Viola
    We obtain the first deterministic extractors for sources generated (or sampled) by small circuits of bounded depth. Our main results are: (1) We extract $k (k/nd)^{O(1)}$ bits with exponentially small error from $n$-bit sources of min-entropy $k$ that are generated by functions $f : \{0,1\}^\ell \to \{0,1\}^n$ where each output bit depends on $\le d$ input bits. In particular, we extract from $NC^0$ sources, corresponding to $d = O(1)$. (2) We extract $k (k/n^{1+\gamma})^{O(1)}$ bits with super-polynomially small error from $n$-bit sources of min-entropy $k$ that are generated by $\poly(n)$-size $AC^0$ circuits, for any $\gamma > 0$. As our starting point, we revisit the connection by Trevisan and Vadhan (FOCS 2000) between circuit lower bounds and extractors for sources generated by circuits. We note that such extractors (with very weak parameters) are equivalent to lower bounds for generating distributions (FOCS 2010; with Lovett, CCC 2011). Building on those bounds, we prove that the sources in (1) and (2) are (close to) a convex combination of high-entropy ``bit-block'' sources. Introduced here, such sources are a special case of affine ones. As extractors for (1) and (2) one can use the extractor for low-weight affine sources by Rao (CCC 2009). Along the way, we exhibit an explicit boolean function $b : \{0,1\}^n \to \{0,1\}$ such that $\poly(n)$-size $AC^0$ circuits cannot generate the distribution $(x,b(x))$, solving a problem about the complexity of distributions. Independently, De and Watson (ECCC TR11-037) obtain a result similar to (1) in the special case $d = o(\lg n)$.
  • Randomness buys depth for approximate counting Authors: Emanuele Viola
    We show that the promise problem of distinguishing $n$-bit strings of Hamming weights $1/2 \pm \Omega(1/\log^{d-1} n)$ can be solved by explicit, randomized (unbounded-fan-in) $\poly(n)$-size depth-$d$ circuits with error $\le 1/3$, but cannot be solved by deterministic $\poly(n)$-size depth-$(d+1)$ circuits, for every $d \ge 2$; and the depth of both is tight. Previous results bounded the depth to within at least an additive 2. Our sharper bounds match Ajtai's simulation of randomized depth-$d$ circuits by deterministic depth-$(d+2)$ circuits (Ann.~Pure Appl.~Logic; '83), and provide an example where randomization (provably) buys resources. \emph{Techniques:} To rule out deterministic circuits we combine the switching lemma with an earlier depth-$3$ lower bound by the author (Comp.~Complexity 2009). To exhibit randomized circuits we combine recent analyses by Amano (ICALP '09) and Brody and Verbin (FOCS '10) with derandomization. To make these circuits explicit -- which we find important for the main message of this paper -- we construct a new pseudorandom generator for certain combinatorial rectangle tests. Based on expander walks, the generator for example fools tests $A_1 \times A_2 \times \ldots \times A_{\lg n}$ for $A_i \subseteq [n], |A_i| = n/2$ with error $1/n$ and seed length $O(\lg n)$, improving on the seed length $\Omega(\lg n \lg \lg n)$ of previous constructions.
  • Pseudorandomness for read-once formulas Authors: Andrej Bogdanov, Periklis Papakonstantinou, Andrew Wan
    We give an explicit construction of a pseudorandom generator for read-once formulas whose inputs can be read in arbitrary order. For formulas in $n$ inputs and arbitrary gates of fan-in at most $d = O(n/\log n)$, the pseudorandom generator uses $(1 - \Omega(1))n$ bits of randomness and produces an output that looks $2^{-\Omega(n)}$-pseudorandom to all such formulas. Our analysis is based on the following lemma. Let $pr = Mz + e$, where $M$ is the parity-check matrix of a sufficiently good binary error-correcting code of constant rate, $z$ is a random string, $e$ is a small-bias distribution, and all operations are modulo 2. Then for every pair of functions $f, g\colon \{0,1\}^{n/2} \to \{0,1\}$ and every equipartition $(I,J)$ of $[n]$, the distribution $pr$ is pseudorandom for the pair $(f(x|_I), g(x|_J))$, where $x|_I$ and $x|_J$ denote the restriction of $x$ to the coordinates in $I$ and $J$, respectively.
  • Dispersers for affine sources with sub-polynomial entropy Authors: Ronen Shaltiel
    We construct an explicit disperser for affine sources over $\mathbb{F}_2^n$ with entropy $k=2^{\log^{0.9} n}=n^{o(1)}$. This is a polynomial time computable function $\mathrm{Disp}:\mathbb{F}_2^n \to \{0,1\}$ such that for every affine space $V$ of $\mathbb{F}_2^n$ that has dimension at least $k$, $\mathrm{Disp}(V)=\{0,1\}$. This improves the best previous construction of [BK] that achieved $k = \Omega(n^{4/5})$. Our technique follows a high level approach that was developed in [BKSSW, BRSW] in the context of dispersers for two independent general sources. The main steps are: (i) adjust the high level approach to make it suitable for affine sources; (ii) implement a ``challenge-response game'' for affine sources (in the spirit of [BKSSW, BRSW], which introduced such games for two independent general sources); (iii) in order to implement the game, construct extractors for affine block-wise sources, for which we use ideas and components from [Rao09]; (iv) combining the three items above, obtain dispersers for affine sources with entropy larger than $\sqrt{n}$, and use a recursive win-win analysis in the spirit of [RSW] to get affine dispersers with entropy less than $\sqrt{n}$.
  • A Small PRG for Polynomial Threshold Functions of Gaussians Authors: Daniel M. Kane
    We develop a new pseudorandom generator for fooling arbitrary degree-$d$ polynomial threshold functions with respect to the Gaussian distribution. Our generator fools such functions to within $\epsilon$ with seed length $\log(n)2^{O(d)}\epsilon^{-4-c}$, where $c$ is an arbitrarily small positive constant.