WatchPed: Pedestrian Crossing Intention Prediction Using Embedded Sensors of Smartwatch
The pedestrian intention prediction problem is to estimate whether or not a target pedestrian will cross the street. State-of-the-art approaches rely heavily on visual information collected with the front camera of the ego-vehicle to predict the pedestrian's intention. As such, the performance of existing methods degrades significantly when the visual information is unreliable, e.g., when the distance between the pedestrian and the ego-vehicle is large, or when the lighting conditions are poor. In this paper, we design, implement, and evaluate the first pedestrian intention prediction model that integrates motion sensor data gathered from the pedestrian's smartwatch (or smartphone). We propose a novel machine learning architecture that effectively incorporates the motion sensor data to reinforce the visual information, significantly improving performance in adverse situations where the visual information alone may be unreliable. We also conduct a large-scale data collection and present the first pedestrian intention prediction dataset augmented with time-synchronized motion sensor data. The dataset consists of a total of 128 video clips captured at varying distances and under varying lighting conditions. We train our model on the widely used JAAD dataset and on our own dataset, and compare its performance with a state-of-the-art model. The results demonstrate that our model outperforms the state-of-the-art method, particularly when the distance to the pedestrian is large (over 70 m) and the lighting conditions are poor.
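The abstract describes fusing smartwatch motion sensor data with visual features but does not spell out the architecture. The sketch below shows one plausible way such a fusion could be structured, assuming PyTorch: a GRU encoder over the time-synchronized IMU stream, a projection of a per-pedestrian visual feature vector, and a late-fusion classifier. The layer sizes, the 6-channel accelerometer/gyroscope input, and all module names are illustrative assumptions, not the paper's actual model.

```python
# Minimal two-branch fusion sketch for crossing-intention prediction.
# Hypothetical design: not the architecture proposed in the paper.
import torch
import torch.nn as nn

class FusionIntentionModel(nn.Module):
    def __init__(self, visual_dim=512, imu_channels=6, hidden_dim=128):
        super().__init__()
        # Project a per-pedestrian visual feature vector
        # (e.g., from a CNN backbone over the front-camera crop).
        self.visual_fc = nn.Sequential(
            nn.Linear(visual_dim, hidden_dim), nn.ReLU())
        # Encode the time-synchronized motion-sensor sequence
        # (accelerometer + gyroscope -> 6 channels per timestep; assumed).
        self.imu_gru = nn.GRU(imu_channels, hidden_dim, batch_first=True)
        # Late fusion: concatenate both embeddings, output one
        # cross / not-cross logit.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1))

    def forward(self, visual_feat, imu_seq):
        v = self.visual_fc(visual_feat)       # (B, hidden_dim)
        _, h = self.imu_gru(imu_seq)          # h: (1, B, hidden_dim)
        fused = torch.cat([v, h.squeeze(0)], dim=-1)
        return self.classifier(fused)         # logit per pedestrian

# Example: batch of 4 pedestrians, 50 IMU samples each.
model = FusionIntentionModel()
logits = model(torch.randn(4, 512), torch.randn(4, 50, 6))
probs = torch.sigmoid(logits)                 # crossing probability
```

A late-fusion design like this keeps the motion branch usable even when the visual branch is degraded (large distance, low light), which matches the motivation stated in the abstract; the actual mechanism used by WatchPed may differ.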