Hera: A Heterogeneity-Aware Multi-Tenant Inference Server for Personalized Recommendations

02/23/2023
by Yujeong Choi, et al.

While providing low latency is a fundamental requirement in deploying recommendation services, achieving high resource utilization is also crucial in cost-effectively maintaining the datacenter. Co-locating multiple workers of a model is an effective way to maximize query-level parallelism and server throughput, but the interference caused by concurrent workers at shared resources can prevent server queries from meeting their SLA. Hera utilizes the heterogeneous memory requirements of multi-tenant recommendation models to intelligently determine a productive set of co-located models and their resource allocation, providing fast response time while achieving high throughput. We show that Hera achieves an average 37.3% improvement in effective machine utilization, enabling a 26% reduction in deployed servers and significantly improving upon the baseline recommendation inference server.
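The abstract does not specify Hera's placement algorithm, but the idea of exploiting heterogeneous memory footprints to choose co-located model sets can be illustrated with a simple first-fit-decreasing packing heuristic. The sketch below is a hypothetical illustration, not Hera's actual method; the model names and the `colocate` helper are invented for the example.

```python
# Hypothetical sketch (not Hera's actual algorithm): greedily co-locate
# recommendation models with heterogeneous memory footprints onto servers,
# filling each server up to its memory capacity to raise utilization.

def colocate(models, capacity_gb):
    """models: list of (name, memory_gb) pairs. Returns per-server model groups."""
    servers = []  # each entry: [remaining_gb, [model names]]
    # Place large-footprint models first (first-fit-decreasing heuristic).
    for name, mem in sorted(models, key=lambda m: -m[1]):
        for server in servers:
            if server[0] >= mem:
                server[0] -= mem
                server[1].append(name)
                break
        else:
            # No existing server has room; provision a new one.
            servers.append([capacity_gb - mem, [name]])
    return [names for _, names in servers]

# Example: four models with heterogeneous embedding-table footprints.
models = [("dlrm-a", 40), ("dlrm-b", 24), ("dlrm-c", 16), ("dlrm-d", 8)]
groups = colocate(models, capacity_gb=64)
# → [['dlrm-a', 'dlrm-b'], ['dlrm-c', 'dlrm-d']]
```

Here the two large models saturate one 64 GB server while the two small ones share another, so the 88 GB of total footprint fits on two servers instead of four. A real system would additionally model shared-resource interference and SLA constraints, which this sketch omits.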
