Image Classification at Supercomputer Scale

11/16/2018
by Chris Ying, et al.

Deep learning is extremely computationally intensive, and hardware vendors have responded by building faster accelerators in large clusters. Training deep learning models at petaFLOPS scale requires overcoming both algorithmic and systems software challenges. In this paper, we discuss three systems-related optimizations: (1) distributed batch normalization to control per-replica batch sizes, (2) input pipeline optimizations to sustain model throughput, and (3) 2-D torus all-reduce to speed up gradient summation. We combine these optimizations to train ResNet-50 on ImageNet to 76.3% accuracy on a 1024-chip TPU v3 Pod with a training throughput of over 1.05 million images/second and no accuracy drop.
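As a rough illustration of the first optimization, the sketch below shows cross-replica batch normalization in JAX: each replica computes statistics over its small local batch, then averages them across replicas so normalization behaves as if it saw a larger effective batch. This is a minimal sketch, not the paper's implementation; the function name distributed_batch_norm is hypothetical, it assumes equal per-replica batch sizes, and it averages over the full replica axis rather than the smaller peer groups a production setup might use.

```python
import jax
import jax.numpy as jnp

def distributed_batch_norm(x, gamma, beta, eps=1e-5, axis_name="replicas"):
    # Hypothetical sketch: per-replica moments, averaged across replicas.
    local_mean = jnp.mean(x, axis=0)
    local_mean_sq = jnp.mean(x * x, axis=0)
    # Cross-replica averages; with equal per-replica batch sizes this yields
    # the mean and variance of the combined (cross-replica) batch.
    mean = jax.lax.pmean(local_mean, axis_name=axis_name)
    mean_sq = jax.lax.pmean(local_mean_sq, axis_name=axis_name)
    var = mean_sq - mean * mean
    return gamma * (x - mean) / jnp.sqrt(var + eps) + beta

# Usage example: one program per local device, per-replica batch of 16, width 4.
n = jax.local_device_count()
x = jnp.ones((n, 16, 4))
gamma = jnp.ones((n, 4))
beta = jnp.zeros((n, 4))
out = jax.pmap(distributed_batch_norm, axis_name="replicas")(x, gamma, beta)
```

Averaging sufficient statistics (mean and mean of squares) rather than per-replica variances keeps the result equal to true batch statistics over the replica group while requiring only two small all-reduces.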
