CLIPGraphs: Multimodal Graph Networks to Infer Object-Room Affinities

Ayush Agrawal∗1    Raghav Arora∗1    Ahana Datta1    Snehasis Banerjee1, 2    Brojeshwar Bhowmick2    Krishna Murthy Jatavallabhula3    Mohan Sridharan4    Madhava Krishna1   

1 IIIT Hyderabad, India    2 TCS Research, Tata Consultancy Services, India    3 CSAIL, Massachusetts Institute of Technology, USA    4 Intelligent Robotics Lab, University of Birmingham, UK   



This paper introduces a novel method for determining the best room in which to place an object during embodied scene rearrangement. While state-of-the-art approaches rely on large language models (LLMs) or reinforcement learning (RL) policies for this task, our approach, CLIPGraphs, efficiently combines commonsense domain knowledge, data-driven methods, and recent advances in multimodal learning. Specifically, it (a) encodes a knowledge graph of prior human preferences about the room location of different objects in home environments, (b) incorporates vision-language features to support multimodal queries based on images or text, and (c) uses a graph network to learn object-room affinities based on embeddings of the prior knowledge and the vision-language features. We demonstrate that our approach provides more accurate estimates of the most appropriate room for objects from a benchmark set of object categories than state-of-the-art baselines.
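To make the high-level pipeline concrete, the following is a minimal sketch (not the authors' released code) of how object-room affinities can be scored by combining frozen CLIP embeddings with a small graph network. It assumes PyTorch and PyTorch Geometric are available, that node features are 512-dimensional CLIP (ViT-B/32) embeddings of object and room labels or images, and that the edge list encodes prior human preferences; the class and variable names are illustrative placeholders.

```python
# Sketch: object-room affinity via CLIP node features + a graph network.
# Assumptions: torch and torch_geometric installed; x holds 512-d CLIP
# embeddings (here random stand-ins); edge_index encodes a toy prior graph.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class AffinityGNN(torch.nn.Module):
    def __init__(self, in_dim=512, hid_dim=256, out_dim=128):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        # Two rounds of message passing over the object-room graph.
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

# Toy graph: nodes 0-2 are objects, nodes 3-4 are rooms.
x = torch.randn(5, 512)                       # stand-in for CLIP embeddings
edge_index = torch.tensor([[0, 1, 2, 3, 4],   # source nodes
                           [3, 3, 4, 0, 2]])  # target nodes

model = AffinityGNN()
z = model(x, edge_index)

# Affinity of object 0 to room 3 as cosine similarity of learned embeddings.
affinity = F.cosine_similarity(z[0], z[3], dim=0)
print(affinity.item())
```

In this sketch the final affinity is a cosine similarity between learned node embeddings; the actual model details, training objective, and graph construction are described in the remainder of the paper.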