Efficient Memory Management for Deep Neural Net Inference

01/10/2020
by Yury Pisarchyk et al.

While deep neural net inference was long considered a task for servers only, recent advances in technology allow inference to be moved to mobile and embedded devices, which is desirable for various reasons ranging from latency to privacy. These devices are limited not only by their compute power and battery, but also by their smaller physical memory and cache; an efficient memory manager therefore becomes a crucial component for deep neural net inference at the edge. In this paper, we explore various strategies to smartly share memory buffers among intermediate tensors in deep neural networks. Employing these strategies can result in up to 10.5x smaller memory footprint than running inference without them, and up to 11% smaller than the state of the art.
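As a rough illustration of the kind of buffer sharing the abstract refers to, here is a hedged sketch (not the paper's actual algorithm) of a greedy, size-ordered strategy: each intermediate tensor is placed into an existing shared buffer whose current occupants' lifetimes do not overlap its own, and a new buffer is opened only when no such buffer exists. All tensor names, sizes, and lifetimes below are illustrative assumptions.

```python
# Hedged sketch of greedy buffer sharing for intermediate tensors.
# Tensors, sizes, and op-index lifetimes are made up for illustration.

from typing import Dict, List, Tuple

Tensor = Tuple[str, int, int, int]  # (name, size_bytes, first_use_op, last_use_op)

def assign_shared_buffers(tensors: List[Tensor]) -> Tuple[Dict[str, int], int]:
    """Assign each tensor to a shared buffer, reusing a buffer whenever the
    lifetimes of its current occupants do not overlap (largest tensor first)."""
    buffers = []  # each buffer: {"size": int, "users": [(first, last), ...]}
    assignment: Dict[str, int] = {}
    for name, size, first, last in sorted(tensors, key=lambda t: -t[1]):
        for i, buf in enumerate(buffers):
            # Inclusive intervals: producer and consumer of the same op overlap.
            if all(last < f or first > l for f, l in buf["users"]):
                buf["size"] = max(buf["size"], size)  # grow buffer if needed
                buf["users"].append((first, last))
                assignment[name] = i
                break
        else:
            buffers.append({"size": size, "users": [(first, last)]})
            assignment[name] = len(buffers) - 1
    return assignment, sum(b["size"] for b in buffers)

# Example: four intermediate tensors in a toy linear graph.
tensors = [("t0", 1024, 0, 1), ("t1", 2048, 1, 2),
           ("t2", 1024, 2, 3), ("t3", 512, 3, 4)]
assignment, total = assign_shared_buffers(tensors)
print(assignment, total)  # t0/t2 and t1/t3 share buffers: 3072 bytes vs. 4608 naive
```

In this toy case, sharing cuts the footprint from 4608 to 3072 bytes; the paper's reported gains of up to 10.5x come from applying such strategies to real network graphs with many short-lived intermediate tensors.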
