Multimodal Interaction-aware Motion Prediction for Autonomous Street Crossing

08/21/2018
by Noha Radwan, et al.

For mobile robots sharing space with humans, acting in accordance with behavioral norms is a critical prerequisite for deployment and ease of adoption. One particular challenge for robots navigating urban environments is handling street intersections. The most commonly employed approach to safely crossing the road relies primarily on predicting the state of the traffic light; however, failure to accurately recognize the signal can lead to catastrophic outcomes, and the problem becomes even more challenging at unsignalized intersections. To address these challenges, we propose a multimodal convolutional neural network framework that predicts whether a street intersection is safe to cross. Our architecture consists of two subnetworks: an interaction-aware trajectory estimation stream, IA-TCNN, which predicts the future states of all observed traffic participants in the scene, and a traffic light recognition stream, AtteNet. IA-TCNN utilizes dilated causal convolutions to model the behavior of all observable dynamic agents in the scene without explicitly assigning priorities to the interactions among them, while AtteNet utilizes Squeeze-and-Excitation blocks to learn a content-aware mechanism for selecting the relevant features from the data, thereby improving robustness to noise. Learned representations from the traffic light recognition stream are fused with the estimated trajectories from the motion prediction stream to learn the crossing decision. Extensive experimental evaluations on public benchmark datasets and our proposed Freiburg Street Crossing dataset demonstrate that our network achieves state-of-the-art performance on each of the subtasks as well as on crossing safety prediction.
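The two building blocks named in the abstract, dilated causal convolutions (IA-TCNN) and Squeeze-and-Excitation channel reweighting (AtteNet), can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the function names, shapes, and weights below are illustrative assumptions, chosen only to show why a causal dilated filter never looks at future time steps and how an SE block gates channels by their pooled content.

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation):
    """Causal 1-D convolution with dilation: the output at time t depends
    only on inputs at times t, t-d, t-2d, ... (no future leakage), which
    is what lets a temporal CNN predict trajectories online."""
    T, k = len(x), len(w)
    pad = (k - 1) * dilation  # left-pad so the output keeps the input length
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(T)])

def se_reweight(features, w_reduce, w_expand):
    """Squeeze-and-Excitation sketch: global-average-pool each channel
    ('squeeze'), pass through a small ReLU bottleneck ('excitation'),
    then rescale every channel by its learned sigmoid gate."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    squeezed = features.mean(axis=1)                         # (C,)
    gates = sigmoid(w_expand @ np.maximum(w_reduce @ squeezed, 0.0))  # (C,)
    return features * gates[:, None]                         # per-channel rescale

# Stacking causal convolutions with dilations 1, 2, 4, ... grows the
# receptive field exponentially while each output stays strictly causal.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = dilated_causal_conv1d(x, np.array([0.5, 0.5]), dilation=2)
# y[t] = 0.5*x[t] + 0.5*x[t-2], with zeros before the sequence start.
```

The hedge here is deliberate: the actual IA-TCNN and AtteNet layers are trained end-to-end with many channels and learned weights; this sketch only demonstrates the causality and channel-gating mechanics the abstract refers to.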
