Experience with PCIe streaming on FPGA for high throughput ML inferencing

10/22/2021
by Piyush Manavar, et al.

Achieving the maximum possible inference rate with minimal hardware resources plays a major role in reducing enterprise operational costs. In this paper we explore the use of PCIe streaming on FPGA-based platforms to achieve high throughput. PCIe streaming is a capability unique to FPGAs that eliminates memory-copy overheads. We present our results for inference on a gradient-boosted-trees model used for online retail recommendations. We compare these results with popular library implementations on GPU and CPU platforms and observe that the PCIe-streaming-enabled FPGA implementation achieves the best overall measured performance. We also measure power consumption across all platforms and find that PCIe streaming on the FPGA platform achieves 25x and 12x better energy efficiency than implementations on CPU and GPU platforms, respectively. We discuss the conditions that need to be met in order to achieve this kind of acceleration on the FPGA. Further, we analyze run-time statistics on the GPU and FPGA and identify opportunities to enhance performance on both platforms.
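
To make the memory-copy argument concrete, the following is a minimal host-side sketch in Python. Every name in it is a hypothetical stand-in (none comes from the paper or from any vendor driver), and the "device" calls are plain Python stubs; it only illustrates the structural difference between a copy-based path, which stages each batch in device memory before and after scoring, and a streamed path, which pushes records in and pulls scores out with no staging buffers.

# Minimal sketch contrasting a copy-based inference path with a streamed one.
# All function names below are hypothetical stand-ins, not APIs from the paper.

def copy_to_device(batch):            # staging copy into device DRAM
    return list(batch)

def copy_from_device(dev_results):    # staging copy back into host memory
    return list(dev_results)

def score_gbt(features):              # placeholder for the accelerated GBT model
    return sum(features)

def copy_based_inference(batches):
    """Conventional flow: each batch is copied to device memory, scored,
    and copied back, so PCIe transfers and compute happen serially."""
    results = []
    for batch in batches:
        dev_in = copy_to_device(batch)
        dev_out = [score_gbt(row) for row in dev_in]
        results.extend(copy_from_device(dev_out))
    return results

def streamed_inference(records):
    """Streamed flow: each record is written straight into the accelerator's
    input stream and its score is read from the output stream as soon as it
    is ready, with no staging buffers, so transfer and compute can overlap."""
    for row in records:
        yield score_gbt(row)

if __name__ == "__main__":
    data = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
    assert copy_based_inference([data]) == list(streamed_inference(data))

In this toy form both paths compute the same scores; the point of the streamed variant is simply that no intermediate device-memory staging appears anywhere on the host side, which is the overhead the paper's FPGA implementation avoids.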
