Vikrant Dewangan∗1 Tushar Choudhary∗1 Shivam Chandhok∗2 Shubham Priyadarshan1 Anushka Jain1 Arun K. Singh3 Siddharth Srivastava4 Krishna Murthy Jatavallabhula†5 K. Madhava Krishna†1
1 Robotics Research Center, IIIT Hyderabad, India 2 University of British Columbia 3 University of Tartu 4 TensorTour Inc 5 MIT Computer Science & Artificial Intelligence Laboratory
This work introduces Talk2BEV, a large vision-language model (LVLM) interface for bird's-eye view (BEV) maps in autonomous driving contexts. While existing perception systems for autonomous driving scenarios have largely focused on a pre-defined (closed) set of object categories and driving scenarios, Talk2BEV blends recent advances in general-purpose language and vision models with BEV-structured map representations, eliminating the need for task-specific models. This enables a single system to cater to a variety of autonomous driving tasks encompassing visual and spatial reasoning, predicting the intents of traffic actors, and decision-making based on visual cues. We extensively evaluate Talk2BEV on a large number of scene understanding tasks that rely both on the ability to interpret free-form natural language queries and on grounding these queries in the visual context embedded into the language-enhanced BEV map. To enable further research in LVLMs for autonomous driving scenarios, we develop and release Talk2BEV-Bench, a benchmark encompassing 1000 human-annotated BEV scenarios, with more than 20,000 questions and ground-truth responses, derived from the NuScenes dataset.