MegaBlocks: Efficient Sparse Training with Mixture-of-Experts

11/29/2022
by Trevor Gale, et al.

We present MegaBlocks, a system for efficient Mixture-of-Experts (MoE) training on GPUs. Our system is motivated by the limitations of current frameworks, which restrict the dynamic routing in MoE layers to satisfy the constraints of existing software and hardware. These formulations force a tradeoff between model quality and hardware efficiency, as users must choose between dropping tokens from the computation or wasting computation and memory on padding. To address these limitations, we reformulate MoE computation in terms of block-sparse operations and develop new block-sparse GPU kernels that efficiently handle the dynamism present in MoEs. Our approach never drops tokens and maps efficiently to modern hardware, enabling end-to-end training speedups of up to 40% over MoEs trained with the state-of-the-art Tutel library and 2.4x over DNNs trained with the highly-optimized Megatron-LM framework.
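
To make the "dropless" idea concrete, the sketch below shows the computation the abstract describes at a high level: the router assigns each token to an expert, the tokens for each expert form a variable-sized group, and every group is processed in full, with no capacity limit that drops tokens and no padding to a fixed batch size. This is a simplified reference illustration, not the MegaBlocks kernels; the function name, shapes, and top-1 routing are assumptions for the example, and the per-expert loop stands in for the single fused block-sparse operation the paper implements on the GPU.

```python
# Minimal sketch of dropless MoE routing (illustration only, not MegaBlocks itself).
import torch

def dropless_moe_ffn(x, router_logits, expert_weights):
    """x: [num_tokens, d_model]; router_logits: [num_tokens, num_experts];
    expert_weights: list of [d_model, d_ff] matrices, one per expert."""
    num_experts = router_logits.shape[1]
    # Top-1 routing: each token is assigned to exactly one expert.
    expert_ids = router_logits.argmax(dim=-1)
    out = torch.zeros(x.shape[0], expert_weights[0].shape[1])
    for e in range(num_experts):
        # Gather the variable-sized group of tokens routed to expert e.
        idx = (expert_ids == e).nonzero(as_tuple=True)[0]
        if idx.numel() == 0:
            continue
        # Each group is a dense matmul of whatever size the router produced;
        # no tokens are dropped and no padding is added. MegaBlocks expresses
        # all of these groups as one block-sparse operation on the GPU.
        out[idx] = x[idx] @ expert_weights[e]
    return out

# Example usage with hypothetical sizes.
x = torch.randn(16, 8)                            # 16 tokens, d_model = 8
logits = torch.randn(16, 4)                       # 4 experts
weights = [torch.randn(8, 32) for _ in range(4)]  # d_ff = 32
y = dropless_moe_ffn(x, logits, weights)
print(y.shape)  # torch.Size([16, 32])
```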
