TorontoCity: Seeing the World with a Million Eyes

12/01/2016
by Shenlong Wang, et al.

In this paper we introduce the TorontoCity benchmark, which covers the full Greater Toronto Area (GTA) with 712.5 km^2 of land, 8,439 km of road, and around 400,000 buildings. Our benchmark provides different perspectives of the world captured from airplanes, drones, and cars driving around the city. Manually labeling such a large-scale dataset is infeasible. Instead, we propose to utilize different sources of high-precision maps to create our ground truth. Towards this goal, we develop algorithms that allow us to align all data sources with the maps while requiring minimal human supervision. We have designed a wide variety of tasks, including building height estimation (reconstruction), road centerline and curb extraction, building instance segmentation, building contour extraction (reorganization), semantic labeling, and scene type classification (recognition). Our pilot study shows that most of these tasks remain difficult for modern convolutional neural networks.
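To make the semantic labeling task concrete, below is a minimal, hypothetical sketch of the kind of convolutional baseline the abstract alludes to: a tiny fully convolutional network that maps an aerial RGB tile to per-pixel class scores. The class set, tile size, and layer widths are illustrative assumptions, not the authors' actual model or the benchmark's configuration.

# Minimal sketch (assumed setup), not the paper's baseline.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, num_classes: int = 4):
        # num_classes is an assumption, e.g. road, building, vegetation, other.
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Upsample back to the input resolution and predict one score per class per pixel.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    model = TinyFCN(num_classes=4)
    tile = torch.randn(1, 3, 256, 256)      # one hypothetical 256x256 aerial RGB tile
    logits = model(tile)                    # shape: (1, 4, 256, 256)
    prediction = logits.argmax(dim=1)       # per-pixel class labels, shape: (1, 256, 256)
    print(prediction.shape)

A real baseline for the benchmark would use a much deeper network and the dataset's own class definitions and evaluation metrics; this sketch only illustrates the input/output structure of dense per-pixel labeling on aerial imagery.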
