TechTalks from event: ICML 2011

Bandits and Online Learning

  • Unimodal Bandits Authors: Jia Yuan Yu; Shie Mannor
    We consider multiarmed bandit problems where the expected reward is unimodal over partially ordered arms. In particular, the arms may belong to a continuous interval or correspond to vertices in a graph, where the graph structure represents similarity in rewards. The unimodality assumption has an important advantage: we can determine if a given arm is optimal by sampling the possible directions around it. This property allows us to quickly and efficiently find the optimal arm and detect abrupt changes in the reward distributions. For the case of bandits on graphs, we incur a regret proportional to the maximal degree and the diameter of the graph, instead of the total number of vertices.
  • On tracking portfolios with certainty equivalents on a generalization of Markowitz model: the Fool, the Wise and the Adaptive Authors: Richard Nock; Brice Magdalou; Eric Briys; Frank Nielsen
    Portfolio allocation theory has been heavily influenced by a major contribution of Harry Markowitz in the early fifties: the mean-variance approach. While there has been a continuous line of work on on-line learning of portfolios over the past decades, very few works have really tried to cope with the Markowitz model. A major drawback of the mean-variance approach is that it is approximation-free only when stock returns obey a Gaussian distribution, an assumption known not to hold in real data. In this paper, we first alleviate this assumption, and rigorously lift the mean-variance model to a more general mean-divergence model in which stock returns are allowed to obey any exponential family of distributions. We then devise a general on-line learning algorithm in this setting. For this algorithm we prove the first lower bounds on the most relevant quantity to be optimized in the Markowitz framework: the certainty equivalents. Experiments on four real-world stock markets display its ability to track portfolios whose cumulative returns exceed those of the best stock by orders of magnitude.
  • Beat the Mean Bandit Authors: Yisong Yue; Thorsten Joachims
    The Dueling Bandits Problem is an online learning framework in which actions are restricted to noisy comparisons between pairs of strategies (also known as bandits). It models settings where absolute rewards are difficult to elicit but pairwise preferences are readily available. In this paper, we extend the Dueling Bandits Problem to a relaxed setting where preference magnitudes can violate transitivity. We present the first algorithm for this more general Dueling Bandits Problem and provide theoretical guarantees in both the online and the PAC settings. Furthermore, we show that the new algorithm has stronger guarantees than existing results even in the original Dueling Bandits Problem, which we validate empirically.
  • Multiclass Classification with Bandit Feedback using Adaptive Regularization Authors: Koby Crammer; Claudio Gentile
    We present a new multiclass algorithm in the bandit framework, where after making a prediction, the learning algorithm receives only partial feedback, i.e., a single bit of right-or-wrong, rather than the true label. Our algorithm is based on the second-order Perceptron, and uses upper-confidence bounds to trade off exploration and exploitation. We analyze this algorithm in a partial adversarial setting, where instances are chosen adversarially, while the labels are chosen according to a linear probabilistic model, which is also chosen adversarially. From the theoretical viewpoint, we show a regret of O(sqrt{T}log(T)), which improves over the current best bounds of O(T^{2/3}) in the fully adversarial setting. We evaluate our algorithm on nine real-world text classification problems, obtaining state-of-the-art results, even compared with non-bandit online algorithms, especially when label noise is introduced.
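
The single-bit feedback model described in the last abstract can be made concrete with a much simpler learner. The sketch below is an epsilon-greedy, Perceptron-style illustration of multiclass learning under bandit feedback only; it is not the second-order upper-confidence algorithm of Crammer and Gentile, and the class name and update rule are illustrative assumptions.

```python
import random

class BanditMulticlass:
    """Toy multiclass learner that only sees right/wrong feedback.

    Epsilon-greedy Perceptron-style sketch of the bandit setting:
    each round we output one label and learn a single bit.
    """
    def __init__(self, n_classes, n_features, epsilon=0.1, seed=0):
        self.K = n_classes
        self.w = [[0.0] * n_features for _ in range(n_classes)]
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def predict(self, x):
        scores = [sum(wi * xi for wi, xi in zip(w, x)) for w in self.w]
        return max(range(self.K), key=scores.__getitem__)

    def update(self, x, y_true):
        """One bandit round: predict, observe only right/wrong, update."""
        if self.rng.random() < self.epsilon:
            y_hat = self.rng.randrange(self.K)   # explore
        else:
            y_hat = self.predict(x)              # exploit
        correct = (y_hat == y_true)              # the single feedback bit
        sign = 1.0 if correct else -1.0          # reinforce or penalize y_hat
        for i, xi in enumerate(x):
            self.w[y_hat][i] += sign * xi
        return correct
```

Note how the update only ever touches the row for the predicted label `y_hat`: unlike full-information Perceptron updates, the true label is never revealed, which is exactly what makes the exploration step necessary.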

Structured Output

  • An Augmented Lagrangian Approach to Constrained MAP Inference Authors: Andre Martins; Mario Figueiredo; Pedro Aguiar; Noah Smith; Eric Xing
    We propose a new fast algorithm for approximate MAP inference on factor graphs, which combines augmented Lagrangian optimization with the dual decomposition method. Each slave subproblem is given a quadratic penalty, which pushes toward faster consensus than in previous subgradient approaches. Our algorithm is provably convergent, parallelizable, and suitable for fine decompositions of the graph. We show how it can efficiently handle problems with (possibly global) structural constraints via simple sort operations. Experiments on synthetic and real-world data show that our approach compares favorably with the state-of-the-art.
  • Max-margin Learning for Lower Linear Envelope Potentials in Binary Markov Random Fields Authors: Stephen Gould
    The standard approach to max-margin parameter learning for Markov random fields (MRFs) involves incrementally adding the most violated constraints during each iteration of the algorithm. This requires exact MAP inference, which is intractable for many classes of MRF. In this paper, we propose an exact MAP inference algorithm for binary MRFs containing a class of higher-order models, known as lower linear envelope potentials. Our algorithm is polynomial in the number of variables and number of linear envelope functions. With tractable inference in hand, we show how the parameters and corresponding feature vectors can be represented in a max-margin framework for efficiently learning lower linear envelope potentials.
  • Inference of Inversion Transduction Grammars Authors: Alexander Clark
    We present the first polynomial algorithm for learning a class of inversion transduction grammars (ITGs) that implement context free transducers -- functions from strings to strings. The class of transductions that we can learn properly includes all subsequential transductions. The algorithm is based on a generalisation of distributional learning; we prove its correctness under an identification in the limit model.
  • Minimal Loss Hashing for Compact Binary Codes Authors: Mohammad Norouzi; David Fleet
    We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.
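
The goal of the last abstract, similarity-preserving binary codes, has a classic unlearned baseline: random-hyperplane hashing (SimHash), where nearby vectors agree on most sign bits. The sketch below illustrates only that code-construction idea; it is not the learned, structured-prediction formulation of Norouzi and Fleet, and the function names are illustrative.

```python
import random

def random_hyperplane_hasher(n_bits, dim, seed=0):
    """Return a function mapping real vectors to n_bits binary codes.

    Each bit is the sign of a dot product with a random Gaussian
    hyperplane, so similar vectors tend to share most bits.
    """
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

    def hasher(x):
        return tuple(
            1 if sum(p * xi for p, xi in zip(plane, x)) >= 0 else 0
            for plane in planes
        )
    return hasher

def hamming(a, b):
    """Hamming distance between two equal-length binary codes."""
    return sum(ai != bi for ai, bi in zip(a, b))
```

Learned hashing methods such as the one above improve on this baseline by choosing the hyperplanes (and a loss over code distances) from data rather than at random.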