Answer Fast: Accelerating BERT on the Tensor Streaming Processor

06/22/2022
by Ibrahim Ahmed, et al.

Transformers have become a predominant machine learning workload: they are not only the de-facto standard for natural language processing tasks, but they are also being deployed in other domains such as vision and speech recognition. Many transformer-based applications, such as machine translation and web search, are real-time systems with strict end-to-end inference latency requirements. Unfortunately, while the majority of transformer computation comes from matrix multiplications, transformers also include several nonlinear components that tend to become the bottleneck during inference. In this work, we accelerate the inference of BERT models on the Tensor Streaming Processor (TSP). By carefully fusing all the nonlinear components with the matrix multiplication components, we efficiently utilize the on-chip matrix multiplication units, achieving a deterministic tail latency of 130 μs for a batch-1 inference through BERT-base, 6x faster than the current state-of-the-art.
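To make the fusion target concrete, here is a minimal NumPy sketch of one BERT-base encoder layer, marking where the nonlinear components (softmax, GELU, LayerNorm) sit between the matrix multiplications. These interleaved operators are what the abstract describes fusing with the matmuls; the code below is generic reference math under standard BERT-base dimensions (hidden 768, 12 heads, FFN 3072), not the authors' TSP implementation, and biases and LayerNorm scale/shift parameters are omitted for brevity.

```python
import numpy as np

H, HEADS, FFN = 768, 12, 3072   # BERT-base dimensions
DH = H // HEADS                 # per-head dimension = 64

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def gelu(x):
    # tanh approximation of GELU, as used in BERT
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def layer_norm(x, eps=1e-12):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def encoder_layer(x, w):
    # --- self-attention: four weight matmuls, with softmax in the middle ---
    q, k, v = x @ w["q"], x @ w["k"], x @ w["v"]             # matmuls
    q = q.reshape(-1, HEADS, DH).transpose(1, 0, 2)          # (heads, seq, DH)
    k = k.reshape(-1, HEADS, DH).transpose(1, 0, 2)
    v = v.reshape(-1, HEADS, DH).transpose(1, 0, 2)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(DH)          # matmul
    probs = softmax(scores)                                  # nonlinear (fusion candidate)
    ctx = (probs @ v).transpose(1, 0, 2).reshape(-1, H)      # matmul
    x = layer_norm(x + ctx @ w["o"])                         # nonlinear (fusion candidate)
    # --- feed-forward: two matmuls, with GELU in the middle ---
    h = gelu(x @ w["ff1"])                                   # nonlinear (fusion candidate)
    return layer_norm(x + h @ w["ff2"])                      # nonlinear (fusion candidate)

rng = np.random.default_rng(0)
w = {name: rng.normal(scale=0.02, size=shape) for name, shape in {
    "q": (H, H), "k": (H, H), "v": (H, H), "o": (H, H),
    "ff1": (H, FFN), "ff2": (FFN, H)}.items()}
tokens = rng.normal(size=(128, H))   # a batch-1 sequence of 128 tokens
print(encoder_layer(tokens, w).shape)  # (128, 768)
```

Run separately as above, every nonlinear operator forces a round trip between the matmul units and wherever the activation math runs; scheduling them to overlap with the matmuls, as the paper does on the TSP, is what keeps the matmul units busy end to end.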
