TechTalks from event: ICML 2011

Graphical Models and Optimization

  • Dynamic Tree Block Coordinate Ascent Authors: Daniel Tarlow; Dhruv Batra; Pushmeet Kohli; Vladimir Kolmogorov
    This paper proposes a novel Linear Programming (LP) based algorithm, called Dynamic Tree-Block Coordinate Ascent (DTBCA), for performing maximum a posteriori (MAP) inference in probabilistic graphical models. Unlike traditional message passing algorithms, which operate uniformly on the whole factor graph, our method dynamically chooses regions of the factor graph on which to focus message-passing efforts. We propose two criteria for selecting regions, including an efficiently computable upper-bound on the increase in the objective possible by passing messages in any particular region. This bound is derived from the theory of primal-dual methods from combinatorial optimization, and the forest that maximizes the bound can be chosen efficiently using a maximum-spanning-tree-like algorithm. Experimental results show that our dynamic schedules significantly speed up state-of-the-art LP-based message-passing algorithms on a wide variety of real-world problems.
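    The forest-selection step above can be sketched with a standard Kruskal-style maximum-spanning-tree routine. This is a hypothetical illustration, not the authors' code: the edge weights here simply stand in for the paper's computed upper bounds on the possible objective increase.

    ```python
    def max_weight_forest(num_nodes, weighted_edges):
        """Greedily pick edges in decreasing weight, skipping any that form a cycle."""
        parent = list(range(num_nodes))

        def find(x):  # union-find with path compression
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        forest = []
        for w, u, v in sorted(weighted_edges, reverse=True):
            ru, rv = find(u), find(v)
            if ru != rv:          # adding (u, v) keeps the edge set acyclic
                parent[ru] = rv
                forest.append((u, v, w))
        return forest

    # Toy graph: weights play the role of per-region improvement bounds.
    edges = [(3.0, 0, 1), (1.0, 1, 2), (2.5, 2, 3), (0.2, 0, 3)]
    print(max_weight_forest(4, edges))  # three edges spanning all four nodes
    ```

    Sorting once and testing each edge with union-find gives the usual near-linear cost in the number of edges, which is what makes re-selecting the focus region cheap enough to do dynamically.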
  • Approximation Bounds for Inference using Cooperative Cuts Authors: Stefanie Jegelka; Jeff Bilmes
    We analyze a family of probability distributions that are characterized by an embedded combinatorial structure. This family includes models having arbitrary treewidth and arbitrarily sized factors. Unlike general models with such freedom, where the “most probable explanation” (MPE) problem is inapproximable, the combinatorial structure within our model, in particular the indirect use of submodularity, leads to several MPE algorithms that all have approximation guarantees.
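    The submodularity the abstract refers to is the diminishing-returns property. A toy illustration (not from the paper) on a set-cover function, which is a standard example of a submodular function:

    ```python
    # Each "item" covers some ground-set elements; coverage is submodular.
    ITEMS = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4, 5}}

    def coverage(S):
        """Number of ground-set elements covered by the items in S."""
        covered = set()
        for item in S:
            covered |= ITEMS[item]
        return len(covered)

    def marginal_gain(S, item):
        return coverage(S | {item}) - coverage(S)

    small, large = {"a"}, {"a", "b"}
    # Diminishing returns: adding "c" helps the smaller set at least as much.
    print(marginal_gain(small, "c"), marginal_gain(large, "c"))  # 3 2
    ```

    It is this kind of structure, rather than bounded treewidth, that the paper leverages to obtain MPE approximation guarantees.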
  • Convex Max-Product over Compact Sets for Protein Folding Authors: Jian Peng; Tamir Hazan; David McAllester; Raquel Urtasun
    In this paper we present an approach to inference in graphical models with a mixture of discrete and bounded continuous variables. In particular, we extend convex max-product to deal with these hybrid models and derive the conditions under which our approach is guaranteed to produce the MAP assignment. When dealing with continuous variables the messages are functions. We investigate a multi-grid approach which can be viewed as a piecewise constant representation of these messages. While this approach provides theoretical guarantees, it is not very practical. Inspired by this, we further propose a particle convex max-product algorithm that significantly outperforms existing particle methods in the task of protein folding and performs comparably to the state of the art while using a smaller amount of prior knowledge.
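    The multi-grid idea can be sketched as coarse-to-fine maximization over a bounded continuous variable: represent a function on a coarse grid of cells, then refine only around the current best cell. This is a hedged illustration of the principle; the objective and grid sizes below are made up, not the paper's actual messages or energies.

    ```python
    def coarse_to_fine_argmax(f, lo, hi, cells=8, levels=4):
        """Approximate argmax of f on [lo, hi] via repeated grid refinement."""
        for _ in range(levels):
            width = (hi - lo) / cells
            centers = [lo + (i + 0.5) * width for i in range(cells)]
            best = max(centers, key=f)            # piecewise-constant evaluation
            lo, hi = best - width, best + width   # zoom into the best cell
        return best

    x = coarse_to_fine_argmax(lambda t: -(t - 1.3) ** 2, -4.0, 4.0)
    print(round(x, 2))  # close to the true maximizer 1.3
    ```

    Each level multiplies the effective resolution while keeping the per-level cost fixed, which is the appeal of a piecewise-constant message representation; the paper's particle variant replaces the fixed grid with adaptively placed particles.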
  • On the Use of Variational Inference for Learning Discrete Graphical Models Authors: Eunho Yang; Pradeep Ravikumar
    We study the general class of estimators for graphical model structure based on optimizing $\ell_1$-regularized approximate log-likelihood, where the approximate likelihood uses tractable variational approximations of the partition function. We provide a message-passing algorithm that directly computes the $\ell_1$ regularized approximate MLE. Further, in the case of certain reweighted entropy approximations to the partition function, we show that surprisingly the $\ell_1$ regularized approximate MLE estimator has a closed form, so that we would no longer need to run through many iterations of approximate inference and message-passing. Lastly, we analyze this general class of estimators for graph structure recovery, or its “sparsistency”, and show that it is indeed sparsistent under certain conditions.
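    The flavor of such a closed form can be seen in the standard scalar fact that an $\ell_1$-penalized quadratic is minimized by soft-thresholding. This is not the paper's estimator, only the well-known building block behind closed-form $\ell_1$-regularized solutions:

    ```python
    def soft_threshold(z, lam):
        """argmin_x 0.5*(x - z)**2 + lam*|x|, for lam >= 0."""
        if z > lam:
            return z - lam
        if z < -lam:
            return z + lam
        return 0.0

    print(soft_threshold(2.0, 0.5))   # 1.5  (shrunk toward zero)
    print(soft_threshold(-0.3, 0.5))  # 0.0  (thresholded to exactly zero)
    ```

    The exact zeros produced by the threshold are what make the estimator's sparsity, and hence sparsistency analysis, possible.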