Description

Mirror Descent is a technique for solving nonsmooth problems with convex structure, primarily convex minimization and convex-concave saddle point problems. Mirror Descent utilizes first-order information on the problem and is a far-reaching extension of the classical Subgradient Descent algorithm (N. Shor, 1967). This technique allows one to adjust, to some extent, the algorithm to the geometry of the problem at hand and, under favorable circumstances, yields convergence rates that are nearly dimension-independent and unimprovable in the large-scale case. As a result, in some important cases (e.g., when solving large-scale deterministic and stochastic convex problems on domains like Euclidean/$\ell_1$/nuclear-norm balls), Mirror Descent algorithms become the methods of choice when low- and medium-accuracy solutions are sought. In the tutorial, we outline the basic Mirror Descent theory for deterministic and stochastic convex minimization and convex-concave saddle point problems, including recent developments aimed at accelerating MD algorithms by utilizing the problem's structure.
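To make the idea concrete, below is a minimal sketch (not taken from the tutorial) of entropic Mirror Descent for minimizing a nonsmooth convex function over the probability simplex, one of the standard "favorable geometry" settings mentioned above. The function name mirror_descent_simplex, the step-size rule, and the example objective are illustrative assumptions, not the tutorial's specific algorithm.

```python
import numpy as np

def mirror_descent_simplex(f_subgrad, dim, n_iters=1000):
    """Entropic Mirror Descent on the probability simplex (a sketch).

    f_subgrad(x) should return a subgradient of the convex objective at x.
    With the entropy prox-function, the mirror step becomes a multiplicative
    update, and the averaged iterate enjoys a nearly dimension-independent
    O(sqrt(log(dim)/n_iters)) guarantee, as alluded to in the description.
    """
    x = np.full(dim, 1.0 / dim)      # start at the center of the simplex
    x_avg = np.zeros(dim)
    for t in range(1, n_iters + 1):
        g = f_subgrad(x)
        step = 1.0 / np.sqrt(t)      # standard non-smooth step-size choice
        # Multiplicative (entropic mirror) update; the shift by g.min()
        # only improves numerical stability and cancels after normalization.
        w = x * np.exp(-step * (g - g.min()))
        x = w / w.sum()              # project back onto the simplex
        x_avg += (x - x_avg) / t     # running average of the iterates
    return x_avg

# Illustrative usage: minimize f(x) = ||A x - b||_1 over the simplex.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
    subgrad = lambda x: A.T @ np.sign(A @ x - b)  # a subgradient of the l1 loss
    x_star = mirror_descent_simplex(subgrad, dim=20)
    print("objective at averaged iterate:", np.abs(A @ x_star - b).sum())
```

Swapping the entropy prox-function for the squared Euclidean norm recovers ordinary projected Subgradient Descent, which illustrates how the method adjusts to the domain's geometry.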
