TechTalks from event: FOCS 2011

We will be uploading the videos for FOCS 2011 during the week of Nov 28th, 2011. If you find any discrepancy, please let us know by clicking the report-error link on the talk page. If you did not permit your talk to be published and we have published it by mistake, please notify us immediately at support AT weyond.com

Session 2B

  • Sharp Mixing Time Bounds for Sampling Random Surfaces Authors: Pietro Caputo and Fabio Martinelli and Fabio Lucio Toninelli
    We analyze the mixing time of a natural local Markov chain (Gibbs sampler) for two commonly studied models of random surfaces: (i) discrete monotone surfaces in Z^3 with “almost planar” boundary conditions and (ii) the one-dimensional discrete Solid-on-Solid (SOS) model. In both cases we prove the first almost optimal bounds O(L^2 polylog(L)), where L is the size of the system. Our proof is inspired by the so-called “mean curvature” heuristic: on a large scale, the dynamics should approximate a deterministic motion in which each point of the surface moves according to a drift proportional to the local inverse mean curvature radius. Key technical ingredients are monotonicity, coupling and an argument due to D. Wilson [17] in the framework of lozenge tiling Markov chains. The novelty of our approach with respect to previous results consists in proving that, with high probability, the dynamics is dominated by a deterministic evolution which, apart from polylog(L) corrections, follows the mean curvature prescription. Our method works equally well for both models despite the fact that their equilibrium maximal deviations from the average height profile occur on very different scales (log L for monotone surfaces and √L for the SOS model). (A toy heat-bath update for the one-dimensional SOS model is sketched after this list.)
  • Improved Mixing Condition on the Grid for Counting and Sampling Independent Sets Authors: Ricardo Restrepo and Jinwoo Shin and Prasad Tetali and Eric Vigoda and Linji Yang
    The hard-core model has received much attention in the past couple of decades as a lattice gas model with hard constraints in statistical physics, a multicast model of calls in communication networks, and as a weighted independent set problem in combinatorics, probability and theoretical computer science. In this model, each independent set $I$ in a graph $G$ is weighted proportionally to $\lambda^{|I|}$, for a positive real parameter $\lambda$. For large $\lambda$, computing the partition function (namely, the normalizing constant which makes the weighting a probability distribution on a finite graph) on graphs of maximum degree $\Delta\ge 3$ is a well-known computationally challenging problem. More concretely, let $\lambda_c(\mathbb{T}_\Delta)$ denote the critical value for the so-called uniqueness threshold of the hard-core model on the infinite $\Delta$-regular tree; recent breakthrough results of Dror Weitz (2006) and Allan Sly (2010) have identified $\lambda_c(\mathbb{T}_\Delta)$ as a threshold where the hardness of estimating the above partition function undergoes a computational transition. We focus on the well-studied particular case of the square lattice $\mathbb{Z}^2$, and provide a new lower bound for the uniqueness threshold, in particular taking it well above $\lambda_c(\mathbb{T}_4)$. Our technique refines and builds on the tree of self-avoiding walks approach of Weitz, resulting in a new technical sufficient criterion (of wider applicability) for establishing strong spatial mixing (and hence uniqueness) for the hard-core model. Applying our technique to $\mathbb{Z}^2$ we prove that strong spatial mixing holds for all $\lambda<2.3882$, improving upon the work of Weitz that held for $\lambda<27/16=1.6875$. Our results imply a fully polynomial deterministic approximation algorithm for estimating the partition function, as well as rapid mixing of the associated Glauber dynamics to sample from the hard-core distribution (a toy sketch of one step of these dynamics appears after this list). While we focus here on the notoriously difficult hard-core model, our approach can also be applied to any 2-spin model, such as the Ising model.
  • Solving connectivity problems parameterized by treewidth in single exponential time Authors: Marek Cygan and Jesper Nederlof and Marcin Pilipczuk and Michał Pilipczuk and Johan M. M. van Rooij and Jakub Onufry Wojtaszczyk
    For the vast majority of local problems on graphs of small treewidth (where by local we mean that a solution can be verified by checking separately the neighbourhood of each vertex), standard dynamic programming techniques give c^tw |V|^O(1) time algorithms, where tw is the treewidth of the input graph G = (V,E) and c is a constant. On the other hand, for problems with a global requirement (usually connectivity) the best-known algorithms were naive dynamic programming schemes running in at least tw^tw time. We breach this gap by introducing a technique we named Cut&Count that allows us to produce c^tw |V|^O(1) time Monte Carlo algorithms for most connectivity-type problems, including HAMILTONIAN PATH, STEINER TREE, FEEDBACK VERTEX SET and CONNECTED DOMINATING SET. (A brute-force illustration of the counting identity behind Cut&Count is sketched after this list.) These results have numerous consequences in various fields, like parameterized complexity, exact and approximate algorithms on planar and H-minor-free graphs and exact algorithms on graphs of bounded degree. In all these fields we are able to improve the best-known results for some problems. Also, looking from a more theoretical perspective, our results are surprising since the equivalence relation that partitions all partial solutions with respect to extendability to global solutions seems to consist of at least tw^tw equivalence classes for all these problems. Our results answer an open problem raised by Lokshtanov, Marx and Saurabh [SODA'11]. In contrast to the problems aiming to minimize the number of connected components that we solve using Cut&Count as mentioned above, we show that, assuming the Exponential Time Hypothesis, the aforementioned gap cannot be breached for some problems that aim to maximize the number of connected components, like CYCLE PACKING. The constant c in our algorithms is in all cases small (at most 4 for undirected problems and at most 6 for directed ones), and in several cases we are able to show that improving those constants would cause the Strong Exponential Time Hypothesis to fail.
  • The minimum k-way cut of bounded size is fixed-parameter tractable Authors: Mikkel Thorup and Ken-ichi Kawarabayashi
    We consider the minimum k-way cut problem for unweighted undirected graphs with a size bound s on the number of cut edges allowed. Thus we seek to remove as few edges as possible so as to split a graph into k components, or report that this requires cutting more than s edges. We show that this problem is fixed-parameter tractable (FPT) with the standard parameterization in terms of the solution size s. More precisely, for s = O(1), we present a quadratic time algorithm. Moreover, we present a much easier linear time algorithm for planar graphs and bounded genus graphs. Our tractability result stands in contrast to known W[1]-hardness of related problems. Without the size bound, Downey et al. [2003] proved that the minimum k-way cut problem is W[1]-hard with parameter k, even for simple unweighted graphs. Downey et al. asked about the status for planar graphs. We get linear time with fixed parameter k for simple planar graphs since the minimum k-way cut of a planar graph is of size at most 6k. More generally, we get FPT with parameter k for any graph class with bounded average degree. A simple reduction shows that vertex cuts are at least as hard as edge cuts, so the minimum k-way vertex cut is also W[1]-hard with parameter k. Marx [2004] proved that finding a minimum k-way vertex cut of size s is also W[1]-hard with parameter s. Marx asked about the FPT status with edge cuts, which we prove tractable here. We are not aware of any other cut problem where the vertex version is W[1]-hard but the edge version is FPT; e.g., Marx [2004] proved that the k-terminal cut problem is FPT parameterized by cut size, both for edge and vertex cuts.
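
The first abstract above studies a Gibbs sampler for random surfaces. To make the dynamics concrete, here is a minimal heat-bath update for the one-dimensional SOS model at inverse temperature beta, where the conditional law of a height given its two neighbours is proportional to exp(-beta(|x - h[i-1]| + |x - h[i+1]|)). This is an illustrative sketch, not the paper's construction: the function name and the truncation window `width` are choices made here (the exact conditional has unbounded support, but its tails decay exponentially, so a wide window is a close approximation).

    import math
    import random

    def sos_heat_bath_step(h, beta, width=50):
        # Pick a uniformly random interior site; the boundary heights
        # h[0] and h[-1] stay fixed (the boundary condition).
        i = random.randrange(1, len(h) - 1)
        # The conditional law of h[i] given its neighbours is proportional
        # to exp(-beta * (|x - h[i-1]| + |x - h[i+1]|)); sample it directly,
        # truncated to a window around the neighbouring heights.
        lo = min(h[i - 1], h[i + 1]) - width
        hi = max(h[i - 1], h[i + 1]) + width
        xs = range(lo, hi + 1)
        weights = [math.exp(-beta * (abs(x - h[i - 1]) + abs(x - h[i + 1])))
                   for x in xs]
        h[i] = random.choices(xs, weights=weights, k=1)[0]

The abstract's mixing-time bound says that, up to polylog(L) factors, on the order of L^2 such single-site updates suffice for a system of size L.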
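
The second abstract concerns the hard-core model and the rapid mixing of its Glauber dynamics for lambda < 2.3882 on the grid. Below is a minimal sketch of one heat-bath step on a finite n x n grid (a stand-in for Z^2); the names are illustrative, and sites on the boundary simply have fewer neighbours.

    import random

    def hardcore_glauber_step(occ, n, lam):
        # occ is the current independent set: a Python set of occupied
        # sites (r, c) on the n x n grid. Choose a uniform random site.
        r, c = random.randrange(n), random.randrange(n)
        nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        blocked = any(u in occ for u in nbrs)   # hard-core constraint
        # Heat bath: resample the site's occupancy from its conditional
        # distribution, which is lam / (1 + lam) if no neighbour is
        # occupied and 0 otherwise. The chain is reversible with respect
        # to Pr[I] proportional to lam ** |I| over independent sets I.
        occ.discard((r, c))
        if not blocked and random.random() < lam / (1.0 + lam):
            occ.add((r, c))

Starting from occ = set() and iterating this step yields samples approaching the hard-core distribution; the strong spatial mixing result in the abstract is what guarantees that this convergence is rapid for all lambda < 2.3882.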
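
The Cut&Count abstract rests on a parity identity: a subgraph on a vertex set X with a distinguished anchor vertex admits exactly 2^(cc(X) - 1) cuts of X that no edge crosses and that keep the anchor on the left, where cc(X) is the number of connected components. This count is odd exactly when the subgraph is connected, so counting (candidate solution, cut) pairs modulo 2 makes disconnected candidates cancel. The brute-force check below (names hypothetical) only illustrates the identity; the actual algorithms compute such counts in c^tw |V|^O(1) time by dynamic programming over a tree decomposition, using random weights and the Isolation Lemma to keep the connected solutions themselves from cancelling.

    from itertools import combinations

    def consistent_cuts(vertices, edges, anchor):
        # Count splits of `vertices` into (left, right) with `anchor`
        # forced into left and no edge crossing the split. Every
        # connected component other than the anchor's may sit on either
        # side, so the count equals 2 ** (components - 1).
        others = sorted(vertices - {anchor})
        count = 0
        for k in range(len(others) + 1):
            for right in combinations(others, k):
                right = set(right)
                if all((u in right) == (v in right) for u, v in edges):
                    count += 1
        return count

    # A connected candidate contributes an odd count ...
    assert consistent_cuts({0, 1, 2}, [(0, 1), (1, 2)], anchor=0) == 1
    # ... while a two-component candidate contributes an even one.
    assert consistent_cuts({0, 1, 2, 3}, [(0, 1), (2, 3)], anchor=0) == 2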