Active Robotic Mapping through Deep Reinforcement Learning
We propose an approach for learning agents for active robotic mapping, where the goal is to map the environment as quickly as possible. The agent learns to map efficiently in simulated environments by receiving rewards corresponding to how fast it constructs an accurate map. In contrast to prior work, this approach learns an exploration policy from a user-specified prior over environment configurations and a sensor model, allowing the policy to specialize to that setting. We evaluate the approach in a simulated Disaster Mapping scenario and find that it performs slightly better than a near-optimal myopic exploration scheme, suggesting that it could be useful in more complicated problem scenarios.
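The abstract does not give the exact reward, but "rewards corresponding to how fast it constructs an accurate map" suggests a per-step accuracy-gain signal. Below is a minimal sketch under an assumed occupancy-grid formulation; the function name `mapping_reward`, the boolean-array map representation, and the cell-matching accuracy measure are illustrative assumptions, not definitions from the paper.

```python
import numpy as np

def mapping_reward(prev_map: np.ndarray,
                   curr_map: np.ndarray,
                   ground_truth: np.ndarray) -> float:
    """Per-step reward: increase in the number of correctly mapped cells.

    Assumed formulation: the environment is an occupancy grid, and all
    three arguments are boolean arrays over the same grid. A cell counts
    as accurately mapped once the agent's estimate matches ground truth.
    """
    prev_correct = np.sum(prev_map == ground_truth)
    curr_correct = np.sum(curr_map == ground_truth)
    return float(curr_correct - prev_correct)
```

Summed over an episode, this per-step reward telescopes to the final map accuracy, while temporal discounting in the RL objective makes earlier accuracy gains worth more, so the agent is pushed to map both accurately and quickly.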