Investigations on the inference optimization techniques and their impact on multiple hardware platforms for Semantic Segmentation

11/29/2019
by Sethu Hareesh Kolluru, et al.

In this work, the task of pixel-wise semantic segmentation in the context of self-driving is explored, with the goal of reducing inference time. Fully Convolutional Networks (FCN-8s, FCN-16s, and FCN-32s) with a VGG16 encoder and skip connections are trained and validated on the Cityscapes dataset. Numerical investigations are carried out for several inference optimization techniques built into TensorFlow and TensorRT to quantify their impact on inference time and network size. Finally, the trained network is ported to an embedded platform (Nvidia Jetson TX1), and the inference time as well as the total energy consumed for inference are compared across hardware platforms.
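To illustrate the kind of TensorFlow/TensorRT optimization pass the abstract refers to, the sketch below converts a trained segmentation model into a TF-TRT graph with reduced precision. This is a minimal sketch, not the paper's exact pipeline: the SavedModel directory names and parameter values are assumptions, and it presumes a TensorFlow 1.x build with TensorRT support (as shipped, for example, in JetPack for the Jetson TX1).

```python
# Minimal sketch (assumed setup, not the paper's exact pipeline): optimize a
# trained FCN exported as a TF 1.x SavedModel with TF-TRT, so that supported
# subgraphs are replaced by TensorRT engines for faster inference.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverter(
    input_saved_model_dir="fcn8s_saved_model",  # hypothetical export of the trained FCN
    max_batch_size=1,                           # single-image inference, as on an embedded board
    max_workspace_size_bytes=1 << 30,           # 1 GB scratch space for building TRT engines
    precision_mode="FP16")                      # one precision setting among FP32/FP16/INT8

trt_graph = converter.convert()                 # rewrites the graph, inserting TRTEngineOps
converter.save("fcn8s_trt_fp16")                # serialized model used for timing runs
```

For INT8 precision, TF-TRT additionally requires a calibration pass over representative input images before the engines can be built. TensorFlow's own graph utilities (e.g., freezing variables into constants and stripping training-only nodes) can be applied beforehand as a complementary, TensorRT-independent optimization.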
