Deep Inverse Reinforcement Learning for Route Choice Modeling

06/18/2022
by Zhan Zhao, et al.

Route choice modeling, i.e., the process of estimating the likely path that individuals follow during their journeys, is a fundamental task in transportation planning and demand forecasting. Classical methods generally adopt the discrete choice model (DCM) framework with linear utility functions and high-level route characteristics. While several recent studies have started to explore the applicability of deep learning for travel choice modeling, they are all path-based with relatively simple model architectures and cannot take advantage of detailed link-level features. Existing link-based models, while theoretically promising, are generally not scalable or flexible enough to account for destination characteristics. To address these issues, this study proposes a general deep inverse reinforcement learning (IRL) framework for link-based route choice modeling, which is capable of incorporating high-dimensional features and capturing complex relationships. Specifically, we adapt an adversarial IRL model to the route choice problem for efficient estimation of destination-dependent reward and policy functions. Experimental results based on taxi GPS data from Shanghai, China validate the improved performance of the proposed model over conventional DCMs and other imitation learning baselines, even for destinations unseen in the training data. We also demonstrate the model's interpretability using explainable AI techniques. The proposed methodology provides a new direction for the future development of route choice models; it is general and should be adaptable to other route choice problems across different modes and networks.
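To make the adversarial IRL idea concrete, the sketch below shows one possible setup in which a destination-conditioned reward network serves as the discriminator against a link-level routing policy. This is an illustrative sketch, not the authors' implementation: the feature dimensions, network sizes, synthetic transitions, and the simple REINFORCE-style policy update are all assumptions made to keep the example self-contained.

```python
# Minimal AIRL-style sketch for link-based route choice (illustrative only;
# not the paper's code). A "state" is the current link, an "action" is the
# choice of the next link, and both the reward and the policy are conditioned
# on destination features. Dimensions and data below are assumptions.
import torch
import torch.nn as nn

LINK_DIM, DEST_DIM, NUM_CANDIDATES = 8, 4, 6  # assumed feature sizes

class RewardNet(nn.Module):
    """Destination-dependent reward g(current link, next link | destination)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * LINK_DIM + DEST_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, link, next_link, dest):
        return self.mlp(torch.cat([link, next_link, dest], dim=-1)).squeeze(-1)

class PolicyNet(nn.Module):
    """Scores candidate next links; softmax gives pi(next link | link, destination)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * LINK_DIM + DEST_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, link, candidates, dest):
        # link: (B, LINK_DIM), candidates: (B, K, LINK_DIM), dest: (B, DEST_DIM)
        B, K, _ = candidates.shape
        ctx = torch.cat([link, dest], dim=-1).unsqueeze(1).expand(B, K, -1)
        scores = self.mlp(torch.cat([ctx, candidates], dim=-1)).squeeze(-1)
        return torch.log_softmax(scores, dim=-1)  # log pi over the K candidates

def disc_logit(reward_net, link, next_link, dest, log_pi):
    # AIRL discriminator D = exp(f) / (exp(f) + pi), so its logit is f - log pi.
    return reward_net(link, next_link, dest) - log_pi

torch.manual_seed(0)
reward_net, policy = RewardNet(), PolicyNet()
opt_r = torch.optim.Adam(reward_net.parameters(), lr=1e-3)
opt_p = torch.optim.Adam(policy.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Synthetic stand-ins for one batch of expert (observed) link transitions.
B = 32
link = torch.randn(B, LINK_DIM)
candidates = torch.randn(B, NUM_CANDIDATES, LINK_DIM)
dest = torch.randn(B, DEST_DIM)
expert_a = torch.randint(0, NUM_CANDIDATES, (B,))  # expert-chosen next links

for step in range(100):
    log_pi = policy(link, candidates, dest)                           # (B, K)
    policy_a = torch.distributions.Categorical(logits=log_pi).sample()
    rows = torch.arange(B)

    # 1) Discriminator step: push expert transitions to 1, policy samples to 0.
    d_exp = disc_logit(reward_net, link, candidates[rows, expert_a], dest,
                       log_pi[rows, expert_a].detach())
    d_pol = disc_logit(reward_net, link, candidates[rows, policy_a], dest,
                       log_pi[rows, policy_a].detach())
    loss_d = bce(d_exp, torch.ones(B)) + bce(d_pol, torch.zeros(B))
    opt_r.zero_grad(); loss_d.backward(); opt_r.step()

    # 2) Policy step: simple REINFORCE-style update toward the learned reward.
    log_pi = policy(link, candidates, dest)
    pol_logpi = log_pi[rows, policy_a]
    with torch.no_grad():
        advantage = reward_net(link, candidates[rows, policy_a], dest) - pol_logpi
    loss_p = -(pol_logpi * advantage).mean()
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()
```

The point the sketch illustrates is that both the reward and the policy condition on destination features, which is what allows a single learned model to produce destination-dependent route choices, including for destinations not seen during training.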
