Understanding Dynamic Scenes using Graph Convolution Networks

Sravan Mylavarapu*1    Mahtab Sandhu*2    Priyesh Vijayan3    K Madhava Krishna2    Balaraman Ravindran4    Anoop Namboodiri1   

1 Center for Visual Information Technology, KCIS, IIIT Hyderabad    2 Robotics Research Center, KCIS, IIIT Hyderabad    3 School of Computer Science, McGill University and Mila    4 Dept. of CSE and Robert Bosch Center for Data Science and AI, IIT Madras   



We present a novel Multi-Relational Graph Convolutional Network (MRGCN) based framework to model on-road vehicle behaviors from a sequence of temporally ordered frames captured by a moving monocular camera. The input to MRGCN is a multi-relational graph whose nodes represent the active and passive agents/objects in the scene, and whose bidirectional edges, connecting every pair of nodes, encode their spatio-temporal relations. We show that this explicit encoding, together with the use of an intermediate spatio-temporal interaction graph, is better suited to our tasks than learning end-to-end directly on a set of temporally ordered spatial relations. We also propose an attention mechanism for MRGCNs that, conditioned on the scene, dynamically scores the importance of information from different interaction types. The proposed framework achieves significant performance gains over prior methods on vehicle-behavior classification tasks on four datasets. We also show seamless transfer of learning across multiple datasets without resorting to fine-tuning. Such behavior-prediction methods find immediate relevance in a variety of navigation tasks such as behavior planning, state estimation, and the detection of traffic violations in videos.
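The abstract describes the core components of the architecture but not its equations. As a rough illustration of the idea, the sketch below implements a single multi-relational GCN layer in PyTorch with one weight matrix per relation type and a learned score that weighs each relation's message per node. The class name, dimensions, tanh-based scoring, and normalized per-relation adjacency inputs are all illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MRGCNLayer(nn.Module):
    """Minimal sketch of a multi-relational GCN layer with
    per-relation attention (hypothetical, for illustration only)."""

    def __init__(self, in_dim: int, out_dim: int, num_relations: int):
        super().__init__()
        # One linear transform per relation type (e.g., per spatio-temporal
        # interaction type between agent pairs).
        self.rel_weights = nn.ModuleList(
            nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_relations)
        )
        self.self_loop = nn.Linear(in_dim, out_dim, bias=False)
        # Scores how much each relation's aggregated message should
        # contribute for each node, conditioned on the message itself.
        self.attn = nn.Linear(out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (N, in_dim)    node features (agents/objects in the scene)
        # adj: (R, N, N)      one row-normalized adjacency matrix per relation
        msgs = torch.stack(
            [adj[r] @ self.rel_weights[r](x) for r in range(len(self.rel_weights))]
        )  # (R, N, out_dim): aggregated message per relation type
        scores = F.softmax(self.attn(torch.tanh(msgs)), dim=0)  # (R, N, 1)
        return F.relu(self.self_loop(x) + (scores * msgs).sum(dim=0))


# Toy usage: a scene graph with 3 nodes and 2 relation types.
layer = MRGCNLayer(in_dim=8, out_dim=16, num_relations=2)
x = torch.randn(3, 8)
adj = torch.rand(2, 3, 3)
adj = adj / adj.sum(dim=-1, keepdim=True)  # row-normalize each relation
out = layer(x, adj)  # (3, 16)
```

The softmax over the relation axis is one plausible way to realize "dynamically scoring the importance of information from different interaction types"; the paper itself should be consulted for the actual mechanism.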