NIPS 2012 Workshop on Log-Linear Models
- Opening Remarks
- Convex Relaxations for Latent Variable Models
- Fast Deterministic Dropout Training
- Sparse Gaussian Conditional Random Fields
- Improving Training Time of Deep Belief Networks through Hybrid Pre-training and Large Batch Sizes
- Newton Methods for Large Scale Optimization of Matrix Functions
- Second Order Methods for Sparse Inverse Covariance Estimation
- Stochastic Approximation and Fast Message-Passing in Graphical Models
- Smoothing Dynamic Systems with State-Dependent Covariance Matrices
- Exploiting Convexity for Large Scale Log-Linear Model Estimation