ICML 2011
TechTalks from event: ICML 2011
Keynotes

Embracing Uncertainty: Applied Machine Learning Comes of Age
ICML 2011 keynote talk.

Evolutionary dynamics of competition and cooperation

Machine Learning in Google Goggles

Building Watson: An Overview of the DeepQA Project
Bandits and Online Learning

Unimodal Bandits
We consider multi-armed bandit problems where the expected reward is unimodal over partially ordered arms. In particular, the arms may belong to a continuous interval or correspond to vertices in a graph, where the graph structure represents similarity in rewards. The unimodality assumption has an important advantage: we can determine if a given arm is optimal by sampling the possible directions around it. This property allows us to quickly and efficiently find the optimal arm and detect abrupt changes in the reward distributions. For the case of bandits on graphs, we incur a regret proportional to the maximal degree and the diameter of the graph, instead of the total number of vertices.
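
The core idea above (sample the directions around the current arm and move toward higher reward) can be sketched for arms on a line. This is an illustrative toy under invented constants, not the paper's algorithm or its regret-optimal sampling schedule:

```python
import random

def unimodal_line_search(means, horizon, explore=10):
    """Toy hill-climb over a unimodal bandit on a line: repeatedly
    sample the current arm and its neighbours, then step toward the
    best empirical mean. `means` holds the hidden Bernoulli reward
    probabilities of each arm."""
    arm = 0
    pulls = 0
    while pulls < horizon:
        # Sample every direction reachable from the current arm.
        candidates = [a for a in (arm - 1, arm, arm + 1) if 0 <= a < len(means)]
        totals = {}
        for a in candidates:
            wins = sum(random.random() < means[a] for _ in range(explore))
            totals[a] = wins / explore
            pulls += explore
        best = max(totals, key=totals.get)
        if best == arm:  # no neighbour looks better: likely at the peak
            return arm
        arm = best
    return arm
```

With a clearly peaked reward profile and enough samples per comparison, the search settles on the peak after a handful of steps, touching only a neighbourhood of arms rather than the whole interval.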

On tracking portfolios with certainty equivalents on a generalization of Markowitz model: the Fool, the Wise and the Adaptive
Portfolio allocation theory has been heavily influenced by a major contribution of Harry Markowitz in the early fifties: the mean-variance approach. While there has been a continuous line of work on online learning of portfolios over the past decades, very few works have really tried to cope with the Markowitz model. A major drawback of the mean-variance approach is that it is approximation-free only when stock returns obey a Gaussian distribution, an assumption known not to hold in real data. In this paper, we first alleviate this assumption, and rigorously lift the mean-variance model to a more general mean-divergence model in which stock returns are allowed to obey any exponential family of distributions. We then devise a general online learning algorithm in this setting. We prove for this algorithm the first lower bounds on the most relevant quantity to be optimized in the framework of the Markowitz model: the certainty equivalents. Experiments on four real-world stock markets display its ability to track portfolios whose cumulative returns exceed those of the best stock by orders of magnitude.
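
As background for the quantity the abstract optimizes: in the classical Gaussian, exponential-utility setting, the certainty equivalent of a portfolio reduces to the familiar mean-variance closed form CE = mu - (lambda/2) * sigma^2. A minimal sketch of that textbook special case (not the paper's generalized mean-divergence model):

```python
def certainty_equivalent_gaussian(mu, sigma2, risk_aversion):
    """Certainty equivalent of a Gaussian return with mean `mu` and
    variance `sigma2` under exponential utility with risk-aversion
    parameter `risk_aversion`: the guaranteed return an investor
    would accept in place of the risky one."""
    return mu - 0.5 * risk_aversion * sigma2
```

For example, a 10% expected return with variance 0.04 and risk aversion 2 is worth the same to this investor as a guaranteed 6%; the paper's mean-divergence model generalizes this trade-off beyond the Gaussian case.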

Beat the Mean Bandit
The Dueling Bandits Problem is an online learning framework in which actions are restricted to noisy comparisons between pairs of strategies (also known as bandits). It models settings where absolute rewards are difficult to elicit but pairwise preferences are readily available. In this paper, we extend the Dueling Bandits Problem to a relaxed setting where preference magnitudes can violate transitivity. We present the first algorithm for this more general Dueling Bandits Problem and provide theoretical guarantees in both the online and the PAC settings. Furthermore, we show that the new algorithm has stronger guarantees than existing results even in the original Dueling Bandits Problem, which we validate empirically.
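
To make the dueling-bandit setting concrete, here is a toy elimination-style strategy: duel random pairs of surviving arms and drop the arm with the worst empirical win rate against the pool. This is a simplified sketch in the spirit of comparing each arm against the "mean" of the others, not the paper's Beat-the-Mean algorithm or its confidence bounds:

```python
import random

def dueling_elimination(pref, rounds, min_plays=50, seed=0):
    """Toy elimination over dueling bandits. `pref[i][j]` is the
    (hidden) probability that arm i beats arm j in a duel. Once every
    surviving arm has dueled at least `min_plays` times, the arm with
    the worst empirical win rate against the pool is eliminated."""
    rng = random.Random(seed)
    active = list(range(len(pref)))
    wins = {a: 0 for a in active}
    plays = {a: 0 for a in active}
    for _ in range(rounds):
        if len(active) == 1:
            break
        a, b = rng.choice(active), rng.choice(active)
        if rng.random() < pref[a][b]:   # noisy pairwise comparison
            wins[a] += 1
        plays[a] += 1
        if min(plays[x] for x in active) >= min_plays:
            worst = min(active, key=lambda x: wins[x] / plays[x])
            active.remove(worst)
            wins = {x: 0 for x in active}    # restart statistics
            plays = {x: 0 for x in active}
    return active
```

Note that only pairwise outcomes are ever observed, matching the setting where absolute rewards are difficult to elicit.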

Multiclass Classification with Bandit Feedback using Adaptive Regularization
We present a new multiclass algorithm in the bandit framework, where after making a prediction, the learning algorithm receives only partial feedback, i.e., a single bit of right-or-wrong, rather than the true label. Our algorithm is based on the second-order Perceptron, and uses upper-confidence bounds to trade off exploration and exploitation. We analyze this algorithm in a partial adversarial setting, where instances are chosen adversarially, while the labels are chosen according to a linear probabilistic model, which is also chosen adversarially. From the theoretical viewpoint, we show a regret of O(sqrt(T) log(T)), which improves over the current best bounds of O(T^{2/3}) in the fully adversarial setting. We evaluate our algorithm on nine real-world text classification problems, obtaining state-of-the-art results, even compared with non-bandit online algorithms, especially when label noise is introduced.
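
To illustrate what one round of bandit multiclass learning looks like, here is a simplified epsilon-greedy update in the style of the earlier Banditron. It is a toy stand-in for the paper's upper-confidence, second-order method; the weight matrix, epsilon schedule, and importance-weighted update are illustrative only:

```python
import numpy as np

def banditron_step(W, x, y_true, rng, eps=0.1):
    """One round of a simplified bandit multiclass update.
    W: (k, d) weight matrix; x: (d,) features; y_true: hidden label.
    The learner predicts via epsilon-greedy exploration and observes
    only the single bit 'was the explored label correct?'."""
    k = W.shape[0]
    y_hat = int(np.argmax(W @ x))
    # Exploration distribution: mostly exploit, spread eps over labels.
    probs = np.full(k, eps / k)
    probs[y_hat] += 1.0 - eps
    y_tilde = rng.choice(k, p=probs)
    correct = bool(y_tilde == y_true)        # the one bit of feedback
    # Importance-weighted update; unbiased for the perceptron update.
    U = np.zeros_like(W)
    if correct:
        U[y_tilde] += x / probs[y_tilde]
    U[y_hat] -= x
    return W + U, correct
```

In expectation this reproduces the full-information perceptron update, which is the basic trick that makes learning from a single right-or-wrong bit possible; the paper replaces the fixed epsilon with adaptive upper-confidence exploration.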
Structured Output

An Augmented Lagrangian Approach to Constrained MAP Inference
We propose a new fast algorithm for approximate MAP inference on factor graphs, which combines augmented Lagrangian optimization with the dual decomposition method. Each slave subproblem is given a quadratic penalty, which pushes toward faster consensus than in previous subgradient approaches. Our algorithm is provably convergent, parallelizable, and suitable for fine decompositions of the graph. We show how it can efficiently handle problems with (possibly global) structural constraints via simple sort operations. Experiments on synthetic and real-world data show that our approach compares favorably with the state-of-the-art.
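
The mechanism of a quadratic penalty pushing slave subproblems toward consensus can be shown on the smallest possible example: two scalar "slaves" minimizing (x-a)^2 and (x-b)^2 that must agree on a shared value. This is a generic ADMM/augmented-Lagrangian consensus toy, not the paper's factor-graph algorithm:

```python
def admm_consensus(a, b, rho=1.0, iters=100):
    """Scalar consensus via the augmented Lagrangian (scaled ADMM).
    Each slave minimizes its own quadratic cost plus a quadratic
    penalty (rho/2)(x - z + u)^2 pulling it toward the shared
    consensus variable z; dual variables u accumulate disagreement."""
    x1 = x2 = z = 0.0
    u1 = u2 = 0.0
    for _ in range(iters):
        # Slave updates: argmin (x-c)^2 + (rho/2)(x - z + u)^2.
        x1 = (2 * a + rho * (z - u1)) / (2 + rho)
        x2 = (2 * b + rho * (z - u2)) / (2 + rho)
        z = 0.5 * (x1 + u1 + x2 + u2)    # consensus update
        u1 += x1 - z                     # dual ascent on the
        u2 += x2 - z                     # disagreement
    return z
```

The quadratic penalty gives a strongly convex subproblem each round, which is why consensus is reached geometrically fast here, versus the slower averaging behaviour of plain subgradient dual decomposition.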

Max-margin Learning for Lower Linear Envelope Potentials in Binary Markov Random Fields
The standard approach to max-margin parameter learning for Markov random fields (MRFs) involves incrementally adding the most violated constraints during each iteration of the algorithm. This requires exact MAP inference, which is intractable for many classes of MRF. In this paper, we propose an exact MAP inference algorithm for binary MRFs containing a class of higher-order models, known as lower linear envelope potentials. Our algorithm is polynomial in the number of variables and number of linear envelope functions. With tractable inference in hand, we show how the parameters and corresponding feature vectors can be represented in a max-margin framework for efficiently learning lower linear envelope potentials.
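
For readers unfamiliar with the potential class named here: a lower linear envelope is simply the pointwise minimum of a set of linear functions of a count statistic, which makes it concave in that count. A minimal sketch of evaluating such a potential (the representation and learning machinery of the paper are not shown):

```python
def lower_linear_envelope(count, pieces):
    """Evaluate a lower linear envelope potential at `count`, a
    (possibly weighted) count of active binary variables.
    `pieces` is a list of (slope, intercept) pairs; the potential
    is the minimum over these linear functions."""
    return min(slope * count + intercept for slope, intercept in pieces)
```

For instance, with pieces [(1.0, 0.0), (0.0, 2.0)] the potential grows linearly with the count and then saturates at 2, a typical shape for encouraging label consistency over a clique without a hard constraint.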

Inference of Inversion Transduction Grammars
We present the first polynomial algorithm for learning a class of inversion transduction grammars (ITGs) that implement context-free transducers: functions from strings to strings. The class of transductions that we can learn properly includes all subsequential transductions. These algorithms are based on a generalisation of distributional learning; we prove correctness of our algorithm under an identification in the limit model.

Minimal Loss Hashing for Compact Binary Codes
We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.
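
A useful reference point for learned hashing is the random-hyperplane (LSH-style) baseline: project the data with a random matrix and keep the sign bits. The sketch below shows that baseline, which methods like this paper improve on by learning the projection under a structured, hinge-like loss; the function names and seed are invented for the example:

```python
import numpy as np

def random_projection_codes(X, n_bits, seed=0):
    """Map rows of X (n, d) to n_bits-bit binary codes via random
    hyperplanes: code bit k is the sign of the projection onto the
    k-th random direction. Similar points tend to share sign bits."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_bits))
    return (X @ W > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.sum(a != b))
```

Nearby inputs land on the same side of most hyperplanes and thus get small Hamming distance; a learned projection concentrates that behaviour on the similarity structure that actually matters for the data.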
 All Sessions
 Keynotes
 Bandits and Online Learning
 Structured Output
 Reinforcement Learning
 Graphical Models and Optimization
 Recommendation and Matrix Factorization
 Neural Networks and Statistical Methods
 Latent-Variable Models
 Large-Scale Learning
 Learning Theory
 Feature Selection, Dimensionality Reduction
 Invited Cross-Conference Track
 Latent-Variable Models
 Active and Online Learning
 Neural Networks and Deep Learning
 Tutorial: Machine Learning in Ecological Science and Environmental Policy
 Tutorial: Machine Learning and Robotics
 Tutorial: Introduction to Bandits: Algorithms and Theory
 Ensemble Methods
 Tutorial: Machine Learning for Large Scale Recommender Systems
 Tutorial: Learning Kernels
 Tutorial: Collective Intelligence and Machine Learning
 Test-of-Time
 Best Paper
 Robotics and Reinforcement Learning
 Transfer Learning
 Kernel Methods
 Optimization
 Learning Theory
 Invited Cross-Conference Session
 Neural Networks and Deep Learning
 Reinforcement Learning
 Bayesian Inference and Probabilistic Models
 Supervised Learning
 Social Networks
 Evaluation Metrics
 Statistical Relational Learning
 Outlier Detection
 Time Series
 Graphical Models and Bayesian Inference
 Sparsity and Compressed Sensing
 Clustering
 Game Theory and Planning and Control
 Semi-Supervised Learning
 Kernel Methods and Optimization
 Neural Networks and NLP
 Probabilistic Models & MCMC
 Online Learning
 Ranking and Information Retrieval