SV-LLM: An Agentic Approach for SoC Security Verification using Large Language Models
Authors
Dipayan Saha, Shams Tarek, H. A. Shaikh, K. T. Hasan, P. S. Nalluri, M. A. Hasan, N. Alam, Jingbo Zhou, Sujan Kumar Saha, Mark Tehranipoor
Abstract
Ensuring the security of complex system-on-chip (SoC) designs is a critical imperative, yet traditional verification techniques struggle to keep pace due to significant challenges in automation, scalability, comprehensiveness, and adaptability. We introduce SV-LLM, a novel multi-agent assistant system designed to automate and enhance SoC security verification. By integrating specialized agents for tasks such as verification question answering, security asset identification, threat modeling, test plan and property generation, vulnerability detection, and simulation-based bug validation, SV-LLM streamlines the verification workflow. To optimize performance across these diverse tasks, the agents leverage different learning paradigms, such as in-context learning, fine-tuning, and retrieval-augmented generation (RAG).
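To illustrate the multi-agent orchestration the abstract describes, the sketch below shows a toy dispatcher that routes a user request to one of several specialized agents. This is a minimal, hypothetical sketch only; the agent names, keywords, and routing logic are assumptions for illustration and do not reflect the paper's actual implementation.

```python
# Illustrative only: a keyword-based dispatcher routing queries to
# hypothetical specialized agents like those named in the abstract.
AGENT_KEYWORDS = {
    "asset_identification": ["asset", "key register", "secret"],
    "threat_modeling": ["threat", "attack", "adversary"],
    "property_generation": ["property", "assertion", "sva"],
    "bug_validation": ["simulate", "testbench", "waveform"],
}

def route(query: str) -> str:
    """Pick the agent whose keywords best match the query.

    Falls back to a general question-answering agent when no
    specialized agent's keywords appear in the query.
    """
    q = query.lower()
    scores = {agent: sum(kw in q for kw in kws)
              for agent, kws in AGENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "question_answering"

print(route("What threats could an adversary pose to this SoC?"))
print(route("Explain what formal verification is"))
```

A production system would likely use an LLM-based planner rather than keyword matching, but the routing idea (one coordinator, many task-specialized agents) is the same.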
If you use this work in your research, please cite the paper.
Direct Citation
D. Saha et al., "SV-LLM: An Agentic Approach for SoC Security Verification using Large Language Models," arXiv preprint arXiv:2506.20415, 2025.
BibTeX
@misc{saha2025svllm,
title={SV-LLM: An Agentic Approach for SoC Security Verification using Large Language Models},
author={Dipayan Saha and Shams Tarek and H. A. Shaikh and K. T. Hasan and P. S. Nalluri and M. A. Hasan and N. Alam and Jingbo Zhou and Sujan Kumar Saha and Mark Tehranipoor},
year={2025},
eprint={2506.20415},
archivePrefix={arXiv},
primaryClass={cs.CR}
}