TechTalks from event: ICML 2011

Tutorial: Introduction to Bandits: Algorithms and Theory

  • Tutorial: Introduction to Bandits: Algorithms and Theory Authors: Jean-Yves Audibert, Rémi Munos
    This tutorial is intended as an introduction to stochastic and adversarial multi-armed bandit algorithms and as a survey of some of the recent advances. In the multi-armed bandit problem, at each stage an agent (or decision maker) chooses one action (or arm) and receives a reward from it. The agent aims to maximize its cumulative reward. Since it does not know the process generating the rewards, it needs to explore (try) the different actions and yet exploit (concentrate its draws on) the seemingly most rewarding arms. The bandit problem has become increasingly popular in the machine learning community. It is the simplest setting where one encounters the exploration-exploitation dilemma. It has a wide range of applications including advertisement [1, 6], economics [2, 12], games [7] and optimization [10, 5, 9, 3], as well as model selection and machine learning algorithms themselves [13, 4]. It can be a central building block of larger systems, as in evolutionary programming [8] and reinforcement learning [14], in particular in large state-space Markovian Decision Problems [11]. (A minimal illustrative bandit sketch follows this list.)
  • Tutorial: Introduction to Bandits: Algorithms and Theory - Part 2 Authors: Jean-Yves Audibert, Rémi Munos
    Continuation of Part 1
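
As a concrete illustration of the exploration-exploitation dilemma described in the tutorial abstract above, the following is a minimal sketch of the classical UCB1 strategy on Bernoulli arms. The arm means, horizon, and function name are illustrative assumptions and are not taken from the tutorial itself.

    # Minimal UCB1 sketch on hypothetical Bernoulli arms (illustrative only).
    import math
    import random

    def ucb1(arm_means, horizon):
        """Play `horizon` rounds; return the total reward collected."""
        n_arms = len(arm_means)
        counts = [0] * n_arms      # number of pulls per arm
        sums = [0.0] * n_arms      # cumulative reward per arm
        total = 0.0
        for t in range(1, horizon + 1):
            if t <= n_arms:
                arm = t - 1        # pull each arm once to initialize
            else:
                # optimism in the face of uncertainty: empirical mean + exploration bonus
                arm = max(range(n_arms),
                          key=lambda a: sums[a] / counts[a]
                          + math.sqrt(2.0 * math.log(t) / counts[a]))
            reward = 1.0 if random.random() < arm_means[arm] else 0.0
            counts[arm] += 1
            sums[arm] += reward
            total += reward
        return total

    print(ucb1([0.3, 0.5, 0.7], horizon=10000))

The exploration bonus shrinks as an arm is pulled more often, so draws concentrate on the empirically best arm while the other arms are still revisited occasionally.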

Ensemble Methods

  • Efficient Rule Ensemble Learning using Hierarchical Kernels Authors: Pratik Jawanpuria; Saketha Nath Jagarlapudi; Ganesh Ramakrishnan
    This paper addresses the problem of Rule Ensemble Learning (REL), where the goal is the simultaneous discovery of a small set of simple rules and their optimal weights that lead to good generalization. Rules are assumed to be conjunctions of basic propositions concerning the values taken by the input features. From the perspectives of interpretability as well as generalization, it is highly desirable to construct rule ensembles with low training error, having rules that are i) simple, i.e., involve few conjunctions, and ii) few in number. We propose to explore the (exponentially) large feature space of all possible conjunctions optimally and efficiently by employing the recently introduced Hierarchical Kernel Learning (HKL) framework. The regularizer employed in the HKL formulation can be interpreted as a potential for discouraging the selection of rules involving a large number of conjunctions -- justifying its suitability for constructing rule ensembles. Simulation results show that the proposed approach improves over state-of-the-art REL algorithms in terms of generalization and indeed learns simple rules. Although this is encouraging, it can be shown that HKL selects a conjunction only if all its subsets are selected, which is highly undesirable. We propose a novel convex formulation which alleviates this problem and generalizes the HKL framework. The main technical contribution of this paper is an efficient mirror-descent based active set algorithm for solving the new formulation. Empirical evaluations on REL problems illustrate the utility of generalized HKL. (A small sketch of a weighted rule ensemble follows this list.)
  • Boosting on a Budget: Sampling for Feature-Efficient Prediction Authors: Lev Reyzin
    In this paper, we tackle the problem of feature-efficient prediction: classification using a limited number of features per test example. We show that modifying an ensemble classifier such as AdaBoost, by sampling hypotheses from its final weighted predictor, is well-suited for this task. We further consider an extension of this problem, where the costs of examining the various features can differ from one another, and we give an algorithm for this more general setting. We prove the correctness of our algorithms and derive bounds for the number of samples needed for given error rates. We also experimentally verify the effectiveness of our methods. (A brief sampling sketch follows this list.)
  • Multiclass Boosting with Hinge Loss based on Output Coding Authors: Tianshi Gao; Daphne Koller
    Multiclass classification is an important and fundamental problem in machine learning. A popular family of multiclass classification methods reduces the multiclass problem to binary classification via output coding. Several multiclass boosting algorithms have been proposed to learn the coding matrix and the associated binary classifiers in a problem-dependent way. These algorithms can be unified under a sum-of-exponential loss function defined in the domain of margins. In contrast, multiclass SVM uses another type of loss function, based on the hinge loss. In this paper, we present a new output-coding-based multiclass boosting algorithm using the multiclass hinge loss, which we call HingeBoost.OC. HingeBoost.OC is tested on various real-world datasets and shows better performance than the existing multiclass boosting algorithm AdaBoost.ERP, as well as one-vs-one, one-vs-all, ECOC, and multiclass SVM, in the majority of cases. (A brief output-coding decoding sketch follows this list.)
  • Generalized Boosting Algorithms for Convex Optimization Authors: Alexander Grubb; Drew Bagnell
    Boosting is a popular way to derive powerful learners from simpler hypothesis classes. Following previous work (Mason et al., 1999; Friedman, 2000) on general boosting frameworks, we analyze gradient-based descent algorithms for boosting with respect to any convex objective and introduce a new measure of weak learner performance into this setting which generalizes existing work. We present the first weak-to-strong learning guarantees for the existing gradient boosting work for smooth convex objectives, and also demonstrate that this approach fails for non-smooth objectives. To address this issue, we present new algorithms which extend this boosting approach to arbitrary convex loss functions and give corresponding weak-to-strong convergence results. In addition, we report experimental results that support our analysis and demonstrate the need for the new algorithms. (A minimal functional gradient boosting sketch follows this list.)
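
For the "Efficient Rule Ensemble Learning" abstract, the sketch below illustrates what a weighted rule ensemble looks like at prediction time: each rule is a conjunction of simple propositions on the input features and carries a weight. The rules, weights, and feature names are hypothetical; learning them (the HKL-based contribution of the paper) is not shown.

    # Hypothetical weighted rule ensemble: each rule is a conjunction of
    # propositions on input features; the weights are given, not learned here.
    rules = [
        ([("age", ">", 40), ("income", "<=", 30000)], 0.9),   # two-conjunct rule
        ([("income", ">", 30000)], -0.4),                     # single-conjunct rule
    ]

    def rule_fires(rule, x):
        """A rule fires only if every proposition in the conjunction holds."""
        ops = {">": lambda a, b: a > b, "<=": lambda a, b: a <= b}
        return all(ops[op](x[feat], val) for feat, op, val in rule)

    def predict(x, rules):
        """Sign of the weighted sum of the rules that fire on x."""
        score = sum(w for rule, w in rules if rule_fires(rule, x))
        return 1 if score >= 0 else -1

    print(predict({"age": 55, "income": 25000}, rules))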
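
For "Boosting on a Budget", the sketch below shows the core idea of predicting with hypotheses sampled from a fixed weighted ensemble, so only a few features are examined per test example. The stump ensemble, budget, and helper names are illustrative assumptions rather than the paper's exact procedure.

    # Feature-efficient prediction by sampling stumps from an AdaBoost-style
    # ensemble (the ensemble below is a hypothetical stand-in; it is not learned here).
    import random

    # (feature index, threshold, sign, weight) for each weak hypothesis
    ensemble = [(0, 0.5, +1, 1.2), (3, -0.1, -1, 0.8), (1, 0.0, +1, 0.5)]

    def sampled_predict(x, ensemble, budget, rng=random):
        """Predict using `budget` stumps drawn with probability proportional to weight."""
        weights = [w for (_, _, _, w) in ensemble]
        score = 0.0
        for _ in range(budget):
            feat, thresh, sign, _ = rng.choices(ensemble, weights=weights, k=1)[0]
            score += sign * (1 if x[feat] > thresh else -1)
        return 1 if score >= 0 else -1

    print(sampled_predict([0.7, -0.2, 0.1, 0.3], ensemble, budget=2))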
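
For "Multiclass Boosting with Hinge Loss based on Output Coding", the sketch below shows how an output-coding scheme decodes a multiclass prediction from binary classifier scores. The coding matrix, scores, and simple correlation decoding rule are assumptions for illustration; they are not HingeBoost.OC itself.

    # Output-coding decoding sketch: pick the class whose codeword best agrees
    # with the binary scores (correlation decoding; values are hypothetical).
    import numpy as np

    # rows = classes, columns = binary problems; entries in {-1, 0, +1}
    M = np.array([[+1, +1,  0],
                  [-1,  0, +1],
                  [ 0, -1, -1]])

    def decode(binary_scores, M):
        """Return the class index whose codeword correlates best with the scores."""
        return int(np.argmax(M @ binary_scores))

    binary_scores = np.array([0.8, -0.3, 0.1])   # outputs of the three binary classifiers
    print(decode(binary_scores, M))              # -> 0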
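
For "Generalized Boosting Algorithms for Convex Optimization", the sketch below is a minimal functional gradient boosting loop for a smooth convex objective (squared loss) with regression-stump weak learners. The data, stump learner, number of rounds, and step size are illustrative assumptions, not the paper's algorithms.

    # Functional gradient boosting sketch: each round fits a weak learner to the
    # negative gradient of the loss and takes a small step in that direction.
    import numpy as np

    def fit_stump(X, residual):
        """Pick the single-feature threshold split that best fits the residual."""
        best = None
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                left = X[:, j] <= t
                if left.all():
                    continue                        # empty right side, skip
                lv, rv = residual[left].mean(), residual[~left].mean()
                err = ((residual - np.where(left, lv, rv)) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, j, t, lv, rv)
        _, j, t, lv, rv = best
        return lambda Z, j=j, t=t, lv=lv, rv=rv: np.where(Z[:, j] <= t, lv, rv)

    def gradient_boost(X, y, rounds=50, step=0.1):
        """Stage-wise fitting: each stump approximates the negative gradient direction."""
        pred, stumps = np.zeros(len(y)), []
        for _ in range(rounds):
            residual = y - pred                     # negative gradient direction for squared loss
            h = fit_stump(X, residual)
            pred += step * h(X)
            stumps.append(h)
        return lambda Z: step * sum(h(Z) for h in stumps)

    rng = np.random.default_rng(0)
    X = rng.random((200, 3))
    y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]
    model = gradient_boost(X, y)
    print(np.mean((model(X) - y) ** 2))             # training error shrinks with more rounds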

Tutorial: Machine Learning for Large Scale Recommender Systems