D. A. Sasi Kiran*1, Kritika Anand*2, Chaitanya Kharyal*1, Gulshan Kumar1, Nandiraju Gireesh1, Snehasis Banerjee2, Ruddra dev Roychoudhury2, Mohan Sridharan3, Brojeshwar Bhowmick2, Madhava Krishna1
This paper describes a framework for the object-goal navigation task, which requires a robot to find and move to the closest instance of a target object class from a random starting position. The framework uses a history of robot trajectories to learn a Spatial Relational Graph (SRG) and Graph Convolutional Network (GCN)-based embeddings of the likelihood of proximity between different semantically-labeled regions and of the occurrence of different object classes in these regions. To locate a target object instance during evaluation, the robot uses Bayesian inference and the SRG to estimate the visible regions, and uses the learned GCN embeddings to rank the visible regions and select the region to explore next. This approach is evaluated on the Matterport3D benchmark dataset of indoor scenes in AI Habitat, a visually realistic simulation environment, and reports a substantial performance improvement over state-of-the-art baselines.
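The region-selection step described above could be sketched as follows. This is a minimal, hypothetical illustration only: the embeddings and visibility beliefs are random stand-ins for the GCN-learned embeddings and the Bayesian visibility estimates in the paper, and the names (`region_emb`, `visibility_belief`, etc.) are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

regions = ["kitchen", "hallway", "bedroom", "living_room"]
num_regions, dim = len(regions), 8

# Stand-ins for GCN-learned embeddings (the paper learns these from the SRG
# and robot trajectory history); here they are just random vectors.
region_emb = rng.normal(size=(num_regions, dim))
target_emb = rng.normal(size=dim)  # embedding of the target object class

# Stand-in for the Bayesian inference step: a belief that each
# semantically-labeled region is currently visible to the robot.
visibility_belief = np.array([0.9, 0.6, 0.2, 0.7])

# Rank visible regions by embedding similarity to the target object class,
# weighted by the visibility belief; the top-ranked region is explored next.
scores = (region_emb @ target_emb) * visibility_belief
ranking = [regions[i] for i in np.argsort(-scores)]
next_region = ranking[0]
print(ranking)
```

In the actual framework the embeddings encode region-proximity and object-occurrence likelihoods rather than arbitrary similarity; this sketch only shows the shape of the ranking computation.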