Many statistical estimators are based on convex optimization problems formed by the weighted sum of a loss function with a norm-based regularizer. Particular examples include $\ell_1$-based methods for sparse vectors, nuclear norm for low-rank matrices, and various combinations thereof for matrix decomposition. In this talk, we describe an interesting connection between computational and statistical efficiency, in particular showing that the same conditions that guarantee that an estimator has low statistical error can also be used to certify fast convergence of first-order optimization methods up to statistical precision.
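As a concrete (illustrative, not from the talk itself) instance of such an estimator, the Lasso combines a least-squares loss with an $\ell_1$ regularizer and can be solved by proximal gradient descent (ISTA), a simple first-order method. The sketch below assumes a synthetic sparse regression problem and hypothetical parameter choices; it only illustrates the setup, not the talk's analysis.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm: shrink each coordinate toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for the Lasso:
    minimize (1/2n) * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    # Step size = 1 / Lipschitz constant of the smooth part's gradient.
    step = n / (np.linalg.norm(X, 2) ** 2)
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n           # gradient of the loss
        b = soft_threshold(b - step * grad, step * lam)  # proximal step
    return b

# Synthetic sparse instance: only the first 5 of 500 coefficients are nonzero.
rng = np.random.default_rng(0)
n, p, s = 200, 500, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0
y = X @ beta + 0.1 * rng.standard_normal(n)
lam = 0.1 * np.sqrt(np.log(p) / n)  # hypothetical regularization level
b_hat = ista(X, y, lam)
print(np.linalg.norm(b_hat - beta))  # estimation error of the computed iterate
```

Here the iterates converge geometrically until they reach the order of the statistical error, which is the regime the talk's conditions certify; running the loop longer yields no statistically meaningful improvement.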