WIP: From Detection to Explanation: Using LLMs for Adversarial Scenario Analysis in Vehicles

Aug 1, 2025

David Fernandez, Pedram MohajerAnsari, Cigdem Kokenoz, Amir Salarpour, Bing Li, Mert D. Pese
Type: Publication
3rd USENIX Symposium on Vehicle Security and Privacy (VehicleSec 2025)

About the Author
David Fernandez, PhD Candidate in Computer Science

David Fernandez is a PhD candidate in Computer Science at Clemson University, working on safe, efficient, and explainable AI for safety-critical systems. His research spans perception, adversarial robustness, and on-device deployment of large foundation models, including LLMs and VLMs, with five first-authored publications on component-level explainability, zero-shot reasoning, and adversarial scenario analysis, alongside collaborative work on edge AI for industrial agentic systems. Much of this research is grounded in autonomous driving, where trustworthiness, latency, and robustness constraints are unforgiving, but the underlying methods transfer broadly to other high-stakes domains.

As a member of Clemson’s VIPR-GS Research Program, he develops hierarchical LLM reasoning frameworks and VLM evaluation systems for the U.S. Army’s Next Generation Combat Vehicle (NGCV) program. At BMW Group, he builds AI production security frameworks and edge deployment systems. His work focuses on robust, interpretable AI that bridges rigorous research and real-world deployment.