Learning to prevent Monocular SLAM failure using Reinforcement Learning
Vignesh Prasad*1, Karmesh Yadav*2, Saurabh Singh3, Swapnil Daga4, Nahas Pareekutty4, K. Madhava Krishna4, Balaraman Ravindran5, Brojeshwar Bhowmick1
1 Embedded Systems and Robotics, TCS Research and Innovation Kolkata, India
2 Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
3 Dept. of Mechanical Engineering, Johns Hopkins University
4 Robotics Research Centre, International Institute of Information Technology, Hyderabad
5 Dept. of Computer Science and Engineering, Indian Institute of Technology Madras
Abstract
Monocular SLAM refers to using a single camera to estimate robot ego motion while building a map of the environment. While Monocular SLAM is a well-studied problem, automating it by integrating it with trajectory planning frameworks is particularly challenging. This paper presents a novel formulation based on Reinforcement Learning (RL) that generates fail-safe trajectories, wherein the SLAM-generated outputs do not deviate significantly from their true values. In essence, the RL framework successfully learns the otherwise complex relation between perceptual inputs and motor actions, and uses this knowledge to generate trajectories that do not cause SLAM failure. We show systematically in simulations how the quality of the SLAM estimates improves dramatically when trajectories are computed using RL. Our method scales effectively across Monocular SLAM frameworks, in both simulation and real-world experiments with a mobile robot.
[Paper]
[Older arXiv Version]
You can also check out our work on using Inverse RL to approach this problem here, which was accepted at AAMAS'17 as an Extended Abstract.