ICML 2011
TechTalks from event: ICML 2011
Graphical Models and Optimization
-
Dynamic Tree Block Coordinate Ascent
This paper proposes a novel Linear Programming (LP) based algorithm, called Dynamic Tree-Block Coordinate Ascent (DTBCA), for performing maximum a posteriori (MAP) inference in probabilistic graphical models. Unlike traditional message passing algorithms, which operate uniformly on the whole factor graph, our method dynamically chooses regions of the factor graph on which to focus message-passing efforts. We propose two criteria for selecting regions, including an efficiently computable upper-bound on the increase in the objective possible by passing messages in any particular region. This bound is derived from the theory of primal-dual methods from combinatorial optimization, and the forest that maximizes the bounds can be chosen efficiently using a maximum-spanning-tree-like algorithm. Experimental results show that our dynamic schedules significantly speed up state-of-the-art LP-based message-passing algorithms on a wide variety of real-world problems.
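The abstract's "maximum-spanning-tree-like algorithm" for picking the forest with the largest bounds can be illustrated with a Kruskal-style maximum-weight-forest pass. This is only a sketch under the assumption that each edge carries a precomputed bound on the possible objective increase; the `max_bound_forest` helper and its inputs are hypothetical, not the paper's actual implementation.

```python
# Hypothetical sketch: select a maximum-weight forest over edges whose
# weights are (assumed precomputed) bounds on the objective increase,
# using a Kruskal-style greedy pass with union-find.

def max_bound_forest(num_nodes, edges):
    """edges: list of (bound, u, v) tuples; returns a maximum-weight forest."""
    parent = list(range(num_nodes))

    def find(x):
        # Path-halving union-find lookup.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    forest = []
    # Consider edges in order of decreasing bound, as in maximum spanning tree.
    for bound, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:  # adding this edge keeps the selection acyclic (a forest)
            parent[ru] = rv
            forest.append((u, v, bound))
    return forest
```

On a toy graph, the pass keeps the highest-bound edges that do not form a cycle, yielding the forest on which message passing would then be focused.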
-
Approximation Bounds for Inference using Cooperative Cuts
We analyze a family of probability distributions that are characterized by an embedded combinatorial structure. This family includes models having arbitrary treewidth and arbitrary sized factors. Unlike general models with such freedom, where the “most probable explanation” (MPE) problem is inapproximable, the combinatorial structure within our model, in particular the indirect use of submodularity, leads to several MPE algorithms that all have approximation guarantees.
-
Convex Max-Product over Compact Sets for Protein Folding
In this paper we present an approach to inference in graphical models with a mixture of discrete and bounded continuous variables. In particular, we extend convex max-product to deal with these hybrid models and derive the conditions under which our approach is guaranteed to produce the MAP assignment. When dealing with continuous variables the messages are functions. We investigate a multi-grid approach which can be viewed as a piecewise constant representation of these messages. While this approach provides the theoretical guarantees, it is not very practical. Inspired by this, we further propose a particle convex max-product algorithm that significantly outperforms existing particle methods in the task of protein folding and performs comparably to the state of the art while using a smaller amount of prior knowledge.
-
On the Use of Variational Inference for Learning Discrete Graphical Models
We study the general class of estimators for graphical model structure based on optimizing $\ell_1$-regularized approximate log-likelihood, where the approximate likelihood uses tractable variational approximations of the partition function. We provide a message-passing algorithm that directly computes the $\ell_1$-regularized approximate MLE. Further, in the case of certain reweighted entropy approximations to the partition function, we show that surprisingly the $\ell_1$-regularized approximate MLE estimator has a closed form, so that we would no longer need to run through many iterations of approximate inference and message-passing. Lastly, we analyze this general class of estimators for graph structure recovery, or its sparsistency, and show that it is indeed sparsistent under certain conditions.