Shivam Singh¹, Karthik Swaminathan¹, Nabanita Dash¹, Ramandeep Singh¹, Snehasis Banerjee², Mohan Sridharan³, K. Madhava Krishna¹
¹Robotics Research Center, IIIT Hyderabad, India   ²TCS Research, Tata Consultancy Services, India   ³School of Informatics, University of Edinburgh, UK
An embodied agent assisting humans is often asked to complete new tasks, and there may not be sufficient time or labeled examples to train the agent to perform these new tasks. Large Language Models (LLMs), trained on considerable knowledge across many domains, can be used to predict a sequence of abstract actions for completing such tasks, although the agent may not be able to execute this sequence due to task-, agent-, or domain-specific constraints. Our framework addresses these challenges by leveraging the generic predictions provided by an LLM and the prior domain knowledge encoded in a Knowledge Graph (KG), enabling the agent to quickly adapt to new tasks. The agent also solicits and uses human input as needed to refine its existing knowledge. Through experimental evaluation of cooking and cleaning tasks in simulated domains, we demonstrate that the interplay between the LLM, KG, and human input leads to substantial performance gains compared with using the LLM alone.