[Publication] ICR-Drive Accepted to CVPR 2026 Workshop
Kaiser Hamid, a PhD student in our lab, has had his paper, “ICR-Drive: Instruction Counterfactual Robustness for End-to-End Language-Driven Autonomous Driving,” accepted to the CVPR 2026 Workshop on Deployment of Foundation Models for Embodied AI (WDFM-EAI).
This work was conducted in collaboration with Can Cui of Bosch AI Research and Dr. Nade Liang.
ICR-Drive introduces a diagnostic framework for evaluating instruction counterfactual robustness in language-conditioned autonomous driving. By generating controlled instruction variations (paraphrased, ambiguous, noisy, and deliberately misleading instructions) and evaluating each under identical CARLA simulation settings, the framework isolates the impact of language on driving performance.
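To give a flavor of the idea, the four perturbation types could be generated along the following lines. This is a hypothetical, simplified sketch for illustration only, not the authors' implementation; the function name, the hand-written paraphrase table, and the specific perturbation rules are all assumptions.

```python
# Hypothetical sketch: counterfactual variants of a driving instruction,
# one per perturbation type (paraphrase, ambiguity, noise, misleading).
# Not the ICR-Drive implementation; rules here are deliberately simplistic.
import random

def make_variants(instruction: str, seed: int = 0) -> dict:
    """Return one illustrative counterfactual per perturbation type."""
    rng = random.Random(seed)

    # Paraphrase: hand-written rewording table (a real system might
    # use an LLM or a paraphrase corpus instead).
    paraphrases = {
        "turn left at the next intersection":
            "take a left when you reach the upcoming junction",
    }
    paraphrase = paraphrases.get(instruction, instruction)

    # Ambiguity: drop the disambiguating location detail.
    ambiguous = instruction.split(" at ")[0]

    # Noise: inject a character-level typo at a random position.
    chars = list(instruction)
    chars[rng.randrange(len(chars))] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    noisy = "".join(chars)

    # Misleading: flip the key direction word.
    misleading = instruction.replace("left", "right")

    return {"paraphrase": paraphrase, "ambiguity": ambiguous,
            "noise": noisy, "misleading": misleading}

variants = make_variants("turn left at the next intersection")
for kind, text in variants.items():
    print(f"{kind}: {text}")
```

In the paper's setup, each variant would then be issued to the driving model under identical simulation conditions so that any performance difference is attributable to the language change alone.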
Our results show that even small changes in natural-language instructions can lead to substantial performance differences and distinct failure modes, highlighting an important reliability challenge for embodied foundation models in safety-critical driving.
Project page: ICR-Drive