Adapting Semantic Segmentation Models for Changes in Illumination and Camera Perspective

09/13/2018
by Wei Zhou et al.

Semantic segmentation using deep neural networks has been widely explored to generate high-level contextual information for autonomous vehicles. To acquire a complete 180° semantic understanding of the forward surroundings, we propose stitching semantic images from multiple cameras with varying orientations. However, previously trained semantic segmentation models show unacceptable performance after significant changes to camera orientation and lighting conditions. To avoid time-consuming hand labeling, we explore and evaluate data augmentation techniques, specifically skew and gamma correction, from a practical real-world standpoint to extend the existing model and provide more robust performance. Experimental results show significant improvements under varying illumination and camera perspective changes.
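The abstract names skew and gamma correction as the two augmentation techniques. A minimal NumPy sketch of both operations follows; the function names, the lookup-table implementation of gamma correction, and the zero-fill row-shear used for skew are our assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Gamma-correct a uint8 image (H, W[, C]) via a 256-entry lookup table.

    gamma < 1 brightens the image, gamma > 1 darkens it.
    """
    lut = np.round(((np.arange(256) / 255.0) ** gamma) * 255.0).astype(np.uint8)
    return lut[img]

def horizontal_skew(img, shear):
    """Shear an image horizontally: row y shifts by round(shear * y) pixels.

    Vacated pixels are zero-filled; a real pipeline might instead crop
    or reflect-pad (an assumption, not the paper's stated policy).
    """
    out = np.zeros_like(img)
    h, w = img.shape[:2]
    for y in range(h):
        s = int(np.clip(round(shear * y), -w, w))
        if s >= 0:
            out[y, s:] = img[y, : w - s] if s else img[y]
        else:
            out[y, : w + s] = img[y, -s:]
    return out
```

For segmentation data, the same geometric skew would be applied to the label mask (with nearest-neighbor semantics, as above), while gamma correction is applied to the image only.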
