Nivedita Rufus1, Unni Krishnan R Nair1, K. Madhava Krishna1, Vineet Gandhi1
In this paper, we present a simple baseline for visual grounding for autonomous driving which outperforms state-of-the-art methods while retaining minimal design choices. Our framework minimizes a cross-entropy loss over the cosine distances between multiple image ROI features and a text embedding (representing the given sentence/phrase). We use pre-trained networks to obtain the initial embeddings and learn a transformation layer on top of the text embedding. We perform experiments on the Talk2Car [7] dataset and achieve 68.7% AP50 accuracy, improving upon the previous state of the art [6] by 8.6%. By showing the promise of simpler alternatives, our investigation suggests reconsidering approaches that employ sophisticated attention mechanisms [13], multi-stage reasoning [6], or complex metric-learning loss functions [18].
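The scoring scheme described above can be sketched as follows. The PyTorch snippet below is a minimal illustration only: the module name, feature dimensions, and the choice of a linear projection for the text transformation are assumptions for the example, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineSoftmaxGrounding(nn.Module):
    """Scores each image ROI against a sentence embedding via cosine
    similarity; training uses cross-entropy over the ROI scores."""

    def __init__(self, text_dim=768, roi_dim=2048):
        super().__init__()
        # Learned transformation mapping the pre-trained text embedding
        # into the ROI feature space (dimensions are illustrative).
        self.text_proj = nn.Linear(text_dim, roi_dim)

    def forward(self, roi_feats, text_emb):
        # roi_feats: (batch, num_rois, roi_dim) from a pre-trained detector
        # text_emb:  (batch, text_dim) from a pre-trained language model
        t = self.text_proj(text_emb)                       # (batch, roi_dim)
        sims = F.cosine_similarity(roi_feats, t.unsqueeze(1), dim=-1)
        return sims                                        # (batch, num_rois)

# Training-step sketch: the ROI best matching the ground-truth box acts as
# the target class for the cross-entropy loss (dummy tensors shown).
model = CosineSoftmaxGrounding()
roi_feats = torch.randn(4, 32, 2048)    # ROI features
text_emb = torch.randn(4, 768)          # sentence embeddings
target = torch.randint(0, 32, (4,))     # index of the best-matching ROI
logits = model(roi_feats, text_emb)
loss = F.cross_entropy(logits, target)
loss.backward()
```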