
Likelihood-based learning of graphical models faces challenges of computational complexity and robustness to model error. This talk discusses methods that directly maximize a measure of the accuracy of predicted marginals, in the context of a particular approximate inference algorithm. Experiments suggest that marginalization-based learning, by compensating for both model and inference approximations at training time, can outperform likelihood-based approaches when the model being fit is approximate in nature.
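To make the idea concrete, here is a minimal, hypothetical sketch of marginalization-based learning, not the method presented in the talk: a two-variable binary MRF, mean-field as the approximate inference algorithm, a squared-error loss on the predicted marginals, and finite-difference gradients. All names, parameter choices, and the toy model itself are illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mean_field_marginals(b1, b2, w, iters=50):
    # Approximate marginals q(x_i = 1) for the toy binary MRF
    # p(x1, x2) ∝ exp(b1*x1 + b2*x2 + w*x1*x2), via mean-field updates.
    m1 = m2 = 0.5
    for _ in range(iters):
        m1 = sigmoid(b1 + w * m2)
        m2 = sigmoid(b2 + w * m1)
    return m1, m2

def marginal_loss(params, targets):
    # Directly measure the accuracy of the *predicted* (approximate)
    # marginals against target marginals -- the quantity being optimized,
    # rather than the likelihood.
    m1, m2 = mean_field_marginals(*params)
    return (m1 - targets[0]) ** 2 + (m2 - targets[1]) ** 2

def train(targets, steps=500, lr=2.0, eps=1e-5):
    # Minimize the marginal loss through the inference procedure itself,
    # so training compensates for the inference approximation.
    # Finite differences stand in for proper back-propagation here.
    params = [0.0, 0.0, 0.0]  # b1, b2, w
    for _ in range(steps):
        grad = []
        for i in range(len(params)):
            bumped = list(params)
            bumped[i] += eps
            g = (marginal_loss(bumped, targets)
                 - marginal_loss(params, targets)) / eps
            grad.append(g)
        params = [p - lr * g for p, g in zip(params, grad)]
    return params
```

In this toy setting the fitted parameters drive the mean-field marginals toward the targets; the point of the construction is that the loss is defined on the output of the (approximate) inference routine, so any systematic inference error is absorbed into the learned parameters.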
