MCR-DL: Mix-and-Match Communication Runtime for Deep Learning

03/15/2023
by Quentin Anthony, et al.

In recent years, the training requirements of many state-of-the-art Deep Learning (DL) models have scaled beyond the compute and memory capabilities of a single processor and have necessitated distributing training across many processors. Training such massive models efficiently requires advanced parallelism strategies, and these distributed DL parallelism strategies in turn demand a varied mixture of collective and point-to-point communication operations across a broad range of message sizes and scales. Examples of models using advanced parallelism strategies include Deep Learning Recommendation Models (DLRM) and Mixture-of-Experts (MoE). The performance of communication libraries varies widely across different communication operations, scales, and message sizes. We propose MCR-DL: an extensible DL communication framework that supports all point-to-point and collective operations while enabling users to dynamically mix and match communication backends for a given operation without deadlocks. MCR-DL also comes packaged with a tuning suite for dynamically selecting the best communication backend for a given input tensor. We select DeepSpeed-MoE and DLRM as candidate DL models and demonstrate a 31% throughput improvement for DeepSpeed-MoE on 256 V100 GPUs on the Lassen HPC system. Further, we achieve a 20% throughput improvement in a dense Megatron-DeepSpeed model and a 25% improvement in DLRM on 32 A100 GPUs on the Theta-GPU HPC system.
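The abstract's idea of dynamically selecting the best backend for a given tensor can be pictured as a lookup over pre-measured per-backend benchmarks. The sketch below is a minimal illustration of that concept only, not MCR-DL's actual API: the function name select_backend, the LATENCY_TABLE, the size buckets, and all latency numbers are hypothetical placeholders for the kind of data a tuning suite would collect.

```python
# Minimal, illustrative sketch: choose a communication backend per operation
# and message size from a pre-measured latency table. Hypothetical numbers;
# a real tuning suite would benchmark each backend on the target system.
import bisect

# Message-size buckets (upper bound in bytes): 1 KiB, 64 KiB, 4 MiB, 256 MiB.
SIZE_BUCKETS = [2**10, 2**16, 2**22, 2**28]

# Hypothetical measured latencies in microseconds, keyed by (operation, bucket).
LATENCY_TABLE = {
    ("allreduce", 2**10): {"nccl": 35.0, "mpi": 12.0},
    ("allreduce", 2**16): {"nccl": 60.0, "mpi": 55.0},
    ("allreduce", 2**22): {"nccl": 400.0, "mpi": 900.0},
    ("allreduce", 2**28): {"nccl": 9000.0, "mpi": 30000.0},
    ("alltoall", 2**10): {"nccl": 50.0, "mpi": 20.0},
    ("alltoall", 2**16): {"nccl": 80.0, "mpi": 70.0},
    ("alltoall", 2**22): {"nccl": 600.0, "mpi": 1200.0},
    ("alltoall", 2**28): {"nccl": 15000.0, "mpi": 40000.0},
}


def select_backend(op: str, message_bytes: int) -> str:
    """Return the lowest-latency backend for an operation and message size."""
    # Find the smallest bucket that holds the message (clamp to the largest).
    idx = min(bisect.bisect_left(SIZE_BUCKETS, message_bytes), len(SIZE_BUCKETS) - 1)
    candidates = LATENCY_TABLE[(op, SIZE_BUCKETS[idx])]
    return min(candidates, key=candidates.get)


if __name__ == "__main__":
    # In this toy table, small all-reduces favor MPI and large ones favor NCCL.
    print(select_backend("allreduce", 512))        # -> mpi
    print(select_backend("allreduce", 8 * 2**20))  # -> nccl
```

In a framework like the one described, the chosen backend would then be used to issue the actual collective or point-to-point call for that tensor, rather than routing every operation through a single fixed communication library.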
