TechTalks from event: ICML 2011

Robotics and Reinforcement Learning

  • Conjugate Markov Decision Processes Authors: Philip Thomas; Andrew Barto
    Many open problems involve the search for a mapping that is used by an algorithm solving an MDP. Useful mappings are often from the state set to some other set. Examples include representation discovery (a mapping to a feature space) and skill discovery (a mapping to skill termination probabilities). Different mappings result in algorithms achieving varying expected returns. In this paper we present a novel approach to searching, over any mapping used by any algorithm attempting to solve an MDP, for the mapping that results in maximum expected return.
  • Approximate Dynamic Programming for Storage Problems Authors: Lauren Hannah; David Dunson
    Storage problems are an important subclass of stochastic control problems. This paper presents a new method, approximate dynamic programming for storage (ADPS), to solve storage problems with continuous, convex decision sets. Unlike other solution procedures, ADPS allows math programming to be used to make decisions in each time period, even in the presence of large state variables. We test ADPS on the day-ahead wind commitment problem with storage.
  • Apprenticeship Learning About Multiple Intentions Authors: Monica Babes; Vukosi Marivate; Michael Littman; Kaushik Subramanian
    In this paper, we apply tools from inverse reinforcement learning (IRL) to the problem of learning from (unlabeled) demonstration trajectories of behavior generated by varying "intentions" or objectives. We derive an EM approach that clusters observed trajectories by inferring the objectives for each cluster using any of several possible IRL methods, and then uses the constructed clusters to quickly identify the intent of a trajectory. We show that a natural approach to IRL, a gradient ascent method that modifies reward parameters to maximize the likelihood of the observed trajectories, is successful at quickly identifying unknown reward functions. We demonstrate these ideas in the context of apprenticeship learning by acquiring the preferences of a human driver in a simple highway car simulator. A hedged sketch of the EM clustering step appears after this list.
  • Classification-based Policy Iteration with a Critic Authors: Victor Gabillon; Alessandro Lazaric; Mohammad Ghavamzadeh; Bruno Scherrer
    In this paper, we study the effect of adding a value function approximation component (critic) to rollout classification-based policy iteration (RCPI) algorithms. The idea is to use a critic to approximate the return after we truncate the rollout trajectories. This allows us to control the bias and variance of the rollout estimates of the action-value function. Therefore, the introduction of a critic can improve the accuracy of the rollout estimates and, as a result, enhance the performance of the RCPI algorithm. We present a new RCPI algorithm, called direct policy iteration with critic (DPI-Critic), and provide its finite-sample analysis when the critic is based on the LSTD method. We empirically evaluate the performance of DPI-Critic and compare it with DPI and LSPI on two benchmark reinforcement learning problems. A minimal sketch of the critic-truncated rollout estimate appears after this list.
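
The EM clustering described in "Apprenticeship Learning About Multiple Intentions" above can be illustrated with a minimal sketch. The trajectory feature vectors, the linear reward parameterization, and the softmax trajectory likelihood below are illustrative assumptions rather than the paper's exact model, and the gradient step stands in for whichever IRL method infers each cluster's reward.

    import numpy as np

    # Minimal sketch of clustering demonstration trajectories by intention with EM.
    # Illustrative assumptions (not the paper's exact model): each trajectory is
    # summarized by a feature vector, rewards are linear in those features, and the
    # likelihood of a trajectory under reward parameters theta is a softmax of the
    # linear returns over the observed trajectories.

    def trajectory_loglik(phi_t, theta, all_phi):
        scores = all_phi @ theta
        norm = scores.max() + np.log(np.exp(scores - scores.max()).sum())
        return phi_t @ theta - norm

    def em_cluster_trajectories(all_phi, n_clusters, n_iters=50, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        n, d = all_phi.shape
        thetas = rng.normal(size=(n_clusters, d))       # reward parameters per cluster
        priors = np.full(n_clusters, 1.0 / n_clusters)

        for _ in range(n_iters):
            # E-step: responsibility of each cluster (intention) for each trajectory
            loglik = np.array([[trajectory_loglik(all_phi[i], thetas[k], all_phi)
                                for k in range(n_clusters)] for i in range(n)])
            logpost = loglik + np.log(priors)
            logpost -= logpost.max(axis=1, keepdims=True)
            resp = np.exp(logpost)
            resp /= resp.sum(axis=1, keepdims=True)

            # M-step: gradient ascent on the responsibility-weighted log-likelihood,
            # standing in for whichever IRL method infers each cluster's reward
            for k in range(n_clusters):
                scores = all_phi @ thetas[k]
                soft = np.exp(scores - scores.max())
                soft /= soft.sum()
                grad = (resp[:, k, None] * (all_phi - soft @ all_phi)).sum(axis=0)
                thetas[k] += lr * grad / resp[:, k].sum()
            priors = resp.mean(axis=0)
        return thetas, resp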
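
The critic-truncated rollout at the heart of DPI-Critic can be sketched in a few lines: each rollout takes the evaluated action, follows the current policy for a fixed horizon, and lets the critic's value estimate bootstrap the truncated tail. The environment interface (reset_to, step) and the critic callable are hypothetical placeholders, not the paper's implementation.

    import numpy as np

    # Hedged sketch of a critic-truncated rollout estimate of Q(s, a) for
    # classification-based policy iteration. `env` (with reset_to and step),
    # `policy`, and `critic` are hypothetical interfaces, not the paper's code.

    def rollout_q_estimate(env, state, action, policy, critic,
                           horizon, gamma, n_rollouts=10):
        estimates = []
        for _ in range(n_rollouts):
            s = env.reset_to(state)
            s, r, done = env.step(s, action)       # take the evaluated action first
            ret, discount = r, gamma
            for _ in range(horizon - 1):           # then follow the current policy
                if done:
                    break
                s, r, done = env.step(s, policy(s))
                ret += discount * r
                discount *= gamma
            if not done:
                ret += discount * critic(s)        # critic bootstraps the truncated tail
            estimates.append(ret)
        return np.mean(estimates)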

Transfer Learning

  • A Graph-based Framework for Multi-Task Multi-View Learning Authors: Jingrui He; Rick Lawrence
    Many real-world problems exhibit dual-heterogeneity. A single learning task might have features in multiple views (i.e., feature heterogeneity); multiple learning tasks might be related to each other through one or more shared views (i.e., task heterogeneity). Existing multi-task learning or multi-view learning algorithms only capture one type of heterogeneity. In this paper, we introduce Multi-Task Multi-View (M^2TV) learning for such complicated learning problems with both feature heterogeneity and task heterogeneity. We propose a graph-based framework (GraM^2) to take full advantage of the dual-heterogeneous nature. Our framework has a natural connection to Reproducing Kernel Hilbert Space (RKHS). Furthermore, we propose an iterative algorithm (IteM^2) for the GraM^2 framework, and analyze its optimality, convergence and time complexity. Experimental results on various real data sets demonstrate its effectiveness.
  • Learning from Multiple Outlooks Authors: Maayan Harel; Shie Mannor
    We propose a novel problem formulation of learning a single task when the data are provided in different feature spaces. Each such space is called an outlook, and is assumed to contain both labeled and unlabeled data. The objective is to take advantage of the data from all the outlooks to better classify each of the outlooks. We devise an algorithm that computes optimal affine mappings from different outlooks to a target outlook by matching moments of the empirical distributions. We further derive a probabilistic interpretation of the resulting algorithm and a sample complexity bound indicating how many samples are needed to adequately find the mapping. We report the results of extensive experiments on activity recognition tasks that show the value of the proposed approach in boosting performance. A hedged sketch of the moment-matching idea appears after this list.
  • Learning with Whom to Share in Multi-task Feature Learning Authors: Zhuoliang Kang; Kristen Grauman; Fei Sha
    In multi-task learning (MTL), multiple tasks are learnt jointly. A major assumption in this paradigm is that all the tasks are indeed related, so that joint training is appropriate and beneficial. In this paper, we study the problem of multi-task learning of shared feature representations among tasks, while simultaneously determining "with whom" each task should share. We formulate the problem as a mixed integer program and provide an alternating minimization technique to solve the optimization problem of jointly identifying grouping structures and parameters. The algorithm monotonically decreases the objective function and converges to a local optimum. Compared to the standard MTL paradigm where all tasks are in a single group, our algorithm improves performance with statistical significance on three out of the four datasets we studied. We also demonstrate its advantage over other task grouping techniques investigated in the literature. A small sketch of the alternating scheme appears after this list.
  • Hierarchical Classification via Orthogonal Transfer Authors: Lin Xiao; Dengyong Zhou; Mingrui Wu
    We consider multiclass classification problems where the set of labels is organized hierarchically as a category tree. We associate each node in the tree with a classifier and classify the examples recursively from the root to the leaves. We propose a hierarchical Support Vector Machine (SVM) that encourages the classifier at each node to be different from the classifiers at its ancestors. More specifically, we introduce regularizations that force the normal vector of the classifying hyperplane at each node to be as orthogonal as possible to those at its ancestors. We establish conditions under which training such a hierarchical SVM is a convex optimization problem, and develop an efficient dual-averaging method for solving it. We evaluate the method on a number of real-world text categorization tasks and obtain state-of-the-art performance. A short sketch of the orthogonality penalty appears after this list.
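
A minimal sketch of the moment-matching idea in "Learning from Multiple Outlooks": map a source outlook onto a target outlook by an affine transformation that aligns the empirical means and covariances (whiten the source, then re-color to the target). This is only one simple way to match first and second moments and assumes equal dimensions; the paper's optimal mappings are more general.

    import numpy as np

    # Hedged sketch: an affine map x -> A x + b whose image has the target outlook's
    # empirical mean and covariance (whiten the source, then re-color to the target).
    # Assumes both outlooks share the same dimension; `eps` regularizes the covariances.

    def moment_matching_affine(X_src, X_tgt, eps=1e-6):
        mu_s, mu_t = X_src.mean(axis=0), X_tgt.mean(axis=0)
        cov_s = np.cov(X_src, rowvar=False) + eps * np.eye(X_src.shape[1])
        cov_t = np.cov(X_tgt, rowvar=False) + eps * np.eye(X_tgt.shape[1])

        def sqrtm(C, inverse=False):
            w, V = np.linalg.eigh(C)                       # symmetric square root
            s = np.sqrt(np.clip(w, eps, None))
            return (V / s) @ V.T if inverse else (V * s) @ V.T

        A = sqrtm(cov_t) @ sqrtm(cov_s, inverse=True)
        b = mu_t - A @ mu_s
        return A, b                                        # mapped data: X_src @ A.T + b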
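
The alternating scheme in "Learning with Whom to Share" can be caricatured as alternating between fitting one shared model per task group and re-assigning each task to the group whose model fits it best. Ridge regression stands in for the paper's shared feature learning, and this sketch ignores the mixed integer formulation; it only illustrates the alternating structure that makes the objective decrease monotonically.

    import numpy as np

    # Hedged sketch of the alternating idea: fit one shared (pooled) ridge model per
    # task group, then re-assign each task to the group whose model fits it best.
    # `tasks` is a list of (X, y) pairs, one per task.

    def alternate_task_grouping(tasks, n_groups, n_iters=20, lam=1.0, seed=0):
        rng = np.random.default_rng(seed)
        d = tasks[0][0].shape[1]
        assign = rng.integers(n_groups, size=len(tasks))

        for _ in range(n_iters):
            # Step 1: one shared model per group (pooled ridge regression)
            models = []
            for g in range(n_groups):
                Xg = np.vstack([X for (X, _), a in zip(tasks, assign) if a == g]
                               or [np.zeros((0, d))])
                yg = np.concatenate([y for (_, y), a in zip(tasks, assign) if a == g]
                                    or [np.zeros(0)])
                w = (np.linalg.solve(Xg.T @ Xg + lam * np.eye(d), Xg.T @ yg)
                     if len(yg) else np.zeros(d))
                models.append(w)

            # Step 2: each task joins the group with the lowest training error
            new_assign = np.array([np.argmin([np.mean((y - X @ w) ** 2) for w in models])
                                   for X, y in tasks])
            if np.array_equal(new_assign, assign):
                break                                  # assignments stable, stop
            assign = new_assign
        return assign, models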
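
The orthogonality penalty in "Hierarchical Classification via Orthogonal Transfer" can be illustrated with a short function: alongside the usual norm regularization, each node is charged for the overlap between its normal vector and those of its ancestors. The weights alpha and beta and the absolute-inner-product form are illustrative choices; the paper's regularizer and the conditions that keep training convex are more specific.

    import numpy as np

    # Hedged sketch of an orthogonal-transfer style penalty. `W` maps node -> normal
    # vector and `parent` maps node -> parent node (the root maps to None);
    # alpha and beta are illustrative weights, not the paper's parameters.

    def orthogonal_transfer_penalty(W, parent, alpha=1.0, beta=1.0):
        penalty = 0.0
        for v, w_v in W.items():
            penalty += 0.5 * alpha * np.dot(w_v, w_v)     # standard SVM-style regularization
            u = parent[v]
            while u is not None:                          # walk up through all ancestors
                penalty += beta * abs(np.dot(w_v, W[u]))  # discourage overlap with ancestors
                u = parent[u]
        return penalty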

Kernel Methods

  • BCDNPKL: Scalable Non-Parametric Kernel Learning Using Block Coordinate Descent Authors: En-Liang Hu; Bo Wang; SongCan Chen
    Most existing approaches to non-parametric kernel learning (NPKL) suffer from expensive computation, which limits their application to large-scale problems. To address this scalability problem, we propose a novel algorithm called BCDNPKL, which is very efficient and scalable. Unlike most existing approaches, BCDNPKL avoids both semidefinite programming (SDP) and eigen-decomposition, thanks to two findings: 1) the original SDP formulation of NPKL can be reduced to a far smaller counterpart that corresponds to learning a sub-kernel (referred to as the boundary kernel); 2) this sub-kernel learning problem can be solved efficiently using the proposed block coordinate descent (BCD) technique. We provide a formal proof of global convergence for the proposed BCDNPKL algorithm. Extensive experiments verify the scalability and effectiveness of BCDNPKL compared with state-of-the-art algorithms.
  • Ultra-Fast Optimization Algorithm for Sparse Multi Kernel Learning Authors: Francesco Orabona; Luo Jie
    Many state-of-the-art approaches to Multi Kernel Learning (MKL) struggle to find a compromise between performance, sparsity of the solution, and speed of the optimization process. In this paper we look at the MKL problem from both a learning and an optimization point of view. Instead of designing a regularizer and then struggling to find an efficient method to minimize it, we design the regularizer while keeping the optimization algorithm in mind. Hence, we introduce a novel MKL formulation that mixes elements of p-norm and elastic-net regularization. We also propose a fast stochastic gradient descent method that solves the novel MKL formulation. We show theoretically and empirically that our method has 1) state-of-the-art performance on many classification tasks; 2) exact sparse solutions with a tunable level of sparsity; 3) a convergence rate bound that depends only logarithmically on the number of kernels used, and is independent of the sparsity required; 4) independence of the particular convex loss function used.
  • Fast Global Alignment Kernels Authors: Marco Cuturi
    We propose novel approaches to cast the widely used family of Dynamic Time Warping (DTW) distances and similarities as positive definite kernels for time series. To this end, we provide new theoretical insights on the family of Global Alignment kernels introduced by Cuturi et al. (2007) and propose alternative kernels which are both positive definite and faster to compute. We provide experimental evidence that these alternatives are both faster and more efficient in classification tasks than other kernels based on the DTW formalism. A compact sketch of the global alignment recursion appears after this list.
  • Mapping kernels for trees Authors: Kilho Shin; Marco Cuturi; Tetsuji Kuboyama
    We present a comprehensive survey of tree kernels through the lens of the mapping kernels framework. We argue that most existing tree kernels, as well as many more that are presented for the first time in this paper, fall into a typology of kernels whose seemingly intricate computation can be efficiently factorized to yield polynomial-time algorithms. Despite this fact, we argue that a naive implementation of such kernels remains prohibitively expensive to compute. We propose an approach whereby computations for smaller trees are cached, which considerably speeds up the computation of all these tree kernels. We provide experimental evidence of this speedup as well as preliminary results on the performance of these kernels. A toy illustration of the caching idea appears after this list.
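
The Global Alignment kernels discussed in "Fast Global Alignment Kernels" admit a compact dynamic-programming implementation: where DTW keeps only the best alignment, the GA kernel sums the scores of all alignments, which the recursion below accumulates in O(nm) time. The Gaussian local similarity and the bandwidth sigma are illustrative choices, and the faster, provably positive definite variants proposed in the paper are not shown.

    import numpy as np

    # Hedged sketch of a Global Alignment kernel between two multivariate time
    # series x (n x d) and y (m x d): the recursion sums the scores of all
    # alignments rather than keeping only the best one, as DTW does. The Gaussian
    # local similarity and bandwidth `sigma` are illustrative choices.

    def global_alignment_kernel(x, y, sigma=1.0):
        n, m = len(x), len(y)
        sqdist = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=2)
        sim = np.exp(-sqdist / (2.0 * sigma ** 2))      # local similarity of time points

        M = np.zeros((n + 1, m + 1))
        M[0, 0] = 1.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                M[i, j] = sim[i - 1, j - 1] * (M[i - 1, j - 1] + M[i - 1, j] + M[i, j - 1])
        return M[n, m]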
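
The benefit of caching computations for smaller trees, as in "Mapping kernels for trees", can be seen even with a deliberately simple kernel that counts pairs of identical complete subtrees: each tree's subtree multiset is computed once and reused across every pairwise kernel evaluation. This toy kernel is not one of the paper's mapping kernels; it only illustrates the caching idea.

    from collections import Counter
    from functools import lru_cache

    # Toy illustration of caching for tree kernels. Trees are nested tuples whose
    # first entry is the node label, e.g. ("S", ("NP", ("D",), ("N",)), ("VP", ("V",))).
    # The kernel counts pairs of identical complete subtrees; each tree's subtree
    # multiset is computed once and reused across all pairwise kernel evaluations.

    @lru_cache(maxsize=None)
    def subtree_multiset(tree):
        counts = Counter([tree])                 # the subtree rooted at this node
        for child in tree[1:]:                   # children follow the node label
            counts = counts + subtree_multiset(child)
        return counts

    def subtree_kernel(t1, t2):
        c1, c2 = subtree_multiset(t1), subtree_multiset(t2)
        return sum(c1[s] * c2[s] for s in c1)    # matching complete subtrees

    def gram_matrix(trees):
        return [[subtree_kernel(a, b) for b in trees] for a in trees]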