NIPS 2017 oral — “TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning” [streamed on Facebook].
机器之心 (Synced) online streaming — “Lifting Efficiency in Deep Learning – for Both Training and Inference” [video]
Great Lectures, Tutorials, and Talks
- Large-Scale Optimization: Beyond Stochastic Gradient Descent and Convexity @ NIPS 2016
- Machine Learning Lectures by Nando de Freitas
- NIPS 2017 video list
- Theories of Deep Learning (STATS 385), Stanford University
- NIPS 2017 Workshop: Deep Learning At Supercomputer Scale