Nandiraju Gireesh1, D. A. Sasi Kiran1, Snehasis Banerjee2, Mohan Sridharan3, Brojeshwar Bhowmick2, Madhava Krishna1
1 Robotics Research Center, IIIT Hyderabad, India 2 TCS Research, Tata Consultancy Services, India 3 Intelligent Robotics Lab, University of Birmingham, UK
Object Goal Navigation requires a robot to find and navigate to an instance of a target object class in a previously unseen environment. Our framework incrementally builds a semantic map of the environment over time and repeatedly selects a long-term goal ("where to go") based on this map in order to locate an instance of the target object. Long-term goal selection is formulated as a vision-based deep reinforcement learning problem: an Encoder Network is trained to extract high-level features from the semantic map and select a long-term goal. In addition, we incorporate data augmentation and Q-function regularization to make long-term goal selection more effective. We report experimental results on the photo-realistic Gibson benchmark dataset in the AI Habitat 3D simulation environment, demonstrating substantial improvements on standard measures over a state-of-the-art data-driven baseline.
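To make the described pipeline concrete, the following is a minimal sketch, in PyTorch, of how an encoder network could map a multi-channel semantic map to Q-values over a coarse grid of candidate long-term goals and pick the highest-valued cell. All names, layer sizes, the map resolution, and the goal-grid granularity here are illustrative assumptions, not the paper's exact architecture or hyperparameters.

```python
import torch
import torch.nn as nn


class GoalEncoderQNet(nn.Module):
    """Illustrative encoder + Q-head: semantic map -> Q-values over a coarse goal grid.

    The channel count, layer widths, and goal_grid size are assumptions for this sketch.
    """

    def __init__(self, map_channels: int = 16, goal_grid: int = 8):
        super().__init__()
        self.goal_grid = goal_grid
        self.encoder = nn.Sequential(
            nn.Conv2d(map_channels, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        # One Q-value per cell of the coarse goal grid.
        self.q_head = nn.Linear(64 * 4 * 4, goal_grid * goal_grid)

    def forward(self, semantic_map: torch.Tensor) -> torch.Tensor:
        # semantic_map: (B, C, H, W), one channel per semantic class / obstacle layer.
        return self.q_head(self.encoder(semantic_map))


def select_long_term_goal(qnet: GoalEncoderQNet, semantic_map: torch.Tensor):
    """Greedily pick the goal cell with the highest Q-value; return map-pixel coordinates."""
    with torch.no_grad():
        q_values = qnet(semantic_map.unsqueeze(0)).squeeze(0)
    idx = int(torch.argmax(q_values))
    row, col = divmod(idx, qnet.goal_grid)
    h, w = semantic_map.shape[-2:]
    # Centre of the selected coarse cell in map pixel coordinates.
    return int((row + 0.5) * h / qnet.goal_grid), int((col + 0.5) * w / qnet.goal_grid)


if __name__ == "__main__":
    qnet = GoalEncoderQNet()
    dummy_map = torch.rand(16, 240, 240)  # hypothetical 16-channel, 240x240 semantic map
    print(select_long_term_goal(qnet, dummy_map))
```

In this sketch the selected cell centre would serve as the long-term goal handed to a local planner; the data augmentation (e.g., random shifts of the map) and Q-function regularization mentioned above would be applied during training of such a network and are not shown here.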