
Description

Inference is the hardest part of learning. Learning the most powerful models requires repeated intractable inference, and approximate inference often interacts badly with parameter optimization. At inference time, an accurate but intractable model can effectively become an inaccurate one due to approximate inference. All of these problems would be avoided if we learned only tractable models, but standard tractable model classes - like thin junction trees and mixture models - are insufficiently expressive for most applications. In recent years, however, a series of surprisingly expressive tractable model classes have been developed, including arithmetic circuits, feature trees, sum-product networks, and tractable Markov logic. I will give an overview of these representations, algorithms for learning them, and their startling successes in challenging applications.
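To illustrate the tractability the abstract refers to, here is a minimal sketch of one of the named model classes, a sum-product network: leaves are univariate distributions, internal nodes are weighted sums (mixtures) or products over disjoint variables, and a single bottom-up pass computes any joint or marginal probability in time linear in the network size. The structure, weights, and variable names below are illustrative assumptions, not from the talk.

```python
def bern(p, value):
    # Bernoulli leaf: returns P(var = value) for a binary variable.
    return p if value == 1 else 1.0 - p

def spn_eval(x1, x2):
    # Hypothetical two-component SPN over binary variables x1, x2.
    # Each product node multiplies leaves over disjoint variables.
    comp_a = bern(0.8, x1) * bern(0.3, x2)   # component A
    comp_b = bern(0.1, x1) * bern(0.9, x2)   # component B
    # Root sum node mixes the components with weights 0.6 and 0.4.
    return 0.6 * comp_a + 0.4 * comp_b

# Exact inference is just evaluation: the joint sums to 1, and a
# marginal like P(x1 = 1) is obtained by summing out x2 -- no
# approximate inference is ever needed.
total = sum(spn_eval(a, b) for a in (0, 1) for b in (0, 1))
p_x1 = spn_eval(1, 0) + spn_eval(1, 1)
```

The same linear-time evaluation extends to arbitrary marginals and (with max nodes) to MPE queries, which is what makes these model classes tractable by construction.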
