Traffic Sign Detection With Event Cameras and DCNN
In recent years, event cameras (DVS, Dynamic Vision Sensors) have been used in vision systems as an alternative or supplement to traditional cameras. They are characterised by high dynamic range, high temporal resolution, low latency, and reliable performance in limited lighting conditions – parameters that are particularly important in the context of advanced driver assistance systems (ADAS) and self-driving cars. In this work, we test whether these rather novel sensors can be applied to the popular task of traffic sign detection. To this end, we analyse different representations of the event data: event frame, event frequency, and the exponentially decaying time surface, and apply video frame reconstruction using a deep neural network called FireNet. We use the deep convolutional neural network YOLOv4 as a detector. For the particular representations, we obtain a detection accuracy in the range of 86.9-88.9% mAP@0.5. The fusion of the considered representations allows us to obtain a detector with a higher accuracy of 89.9% mAP@0.5. In comparison, the detector for the frames reconstructed with FireNet is characterised by an accuracy of 72.67% mAP@0.5. The results obtained illustrate the potential of event cameras in automotive applications, either as standalone sensors or in close cooperation with typical frame-based cameras.
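The three event representations named above (event frame, event frequency, and exponentially decaying time surface) can be sketched as simple per-pixel aggregations over a window of events. The sketch below is an illustrative reconstruction under common definitions, not the paper's exact formulation; the function name, the `(x, y, t, p)` event layout, and the decay constant `tau` are assumptions.

```python
import numpy as np

def event_representations(events, shape, t_ref, tau=0.05):
    """Build three common event representations from an (N, 4) array
    of events with columns (x, y, t, p), where p is polarity in {-1, +1}.

    Illustrative sketch under common definitions -- not necessarily
    the exact formulation used in the paper.
    """
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]

    # Event frame: accumulate signed polarity per pixel.
    frame = np.zeros(shape)
    np.add.at(frame, (y, x), p)

    # Event frequency: number of events per pixel in the window.
    freq = np.zeros(shape)
    np.add.at(freq, (y, x), 1)

    # Exponentially decaying time surface: exponential decay measured
    # from the most recent event timestamp at each pixel.
    t_last = np.full(shape, -np.inf)
    for xi, yi, ti in zip(x, y, t):
        if ti > t_last[yi, xi]:
            t_last[yi, xi] = ti
    surface = np.where(np.isfinite(t_last),
                       np.exp(-(t_ref - t_last) / tau),
                       0.0)

    return frame, freq, surface
```

Each output is a 2D map of the sensor resolution, so all three can be stacked as input channels for a frame-based detector such as YOLOv4, which is how a fusion of representations can be fed to a single network.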