Trusting the Machine: How Secure is LLM-Generated RTL Code?

Authors

Zahin Ibnat, Paul E. Calzada, Dipayan Saha, H. Al-Shaikh, Sujan Kumar Saha, Jingbo Zhou, Farimah Farahmandi, Mark Tehranipoor

Abstract

With the increasing integration of large language models (LLMs) into the IC design flow for hardware design automation, there is growing concern about the security of the generated register-transfer level (RTL) code. Current LLMs often fail to incorporate security considerations during design generation. An investigation revealed that RTL code produced by leading LLMs can contain significant security flaws, such as improper handling of registers and insecure datapaths. Of 42 LLM-generated designs across 7 different models, 74% were found to contain at least one security vulnerability. This work highlights that, while AI-driven automation offers substantial benefits, the generated RTL designs are not inherently secure, and it recommends specific mitigation mechanisms.
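
As a rough, hypothetical sketch (not code taken from the paper), the two flaw classes named above might look like the following Verilog module, where a key register ignores the reset signal and the raw key is routed onto a debug datapath guarded only by a runtime enable:

// Hypothetical illustration of the vulnerability classes described in the abstract;
// module and signal names are invented for this example.
module key_reg_example (
    input  wire         clk,
    input  wire         rst_n,     // declared but never used: reset is ignored
    input  wire         load_key,
    input  wire [127:0] key_in,
    input  wire         debug_en,
    output reg  [127:0] key_out,
    output wire [127:0] debug_bus
);

    // Improper register handling: the key register is never cleared on reset,
    // so a stale secret can persist across reset and power-state transitions.
    always @(posedge clk) begin
        if (load_key)
            key_out <= key_in;
    end

    // Insecure datapath: the raw key is driven onto a debug port behind a
    // software-controllable enable, exposing the secret outside its trust boundary.
    assign debug_bus = debug_en ? key_out : 128'b0;

endmodule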


If you use this work in your research, please cite the paper.

Direct Citation

Z. Ibnat et al., "Trusting the Machine: How Secure is LLM-Generated RTL Code?" in 2025 ACM/IEEE 7th Symposium on Machine Learning for CAD (MLCAD), 2025.

BibTeX

@inproceedings{ibnat2025trusting,
  title={Trusting the Machine: How Secure is LLM-Generated RTL Code?},
  author={Ibnat, Zahin and Calzada, Paul E and Saha, Dipayan and Al-Shaikh, H and Saha, Sujan Kumar and Zhou, Jingbo and Farahmandi, Farimah and Tehranipoor, Mark},
  booktitle={2025 ACM/IEEE 7th Symposium on Machine Learning for CAD (MLCAD)},
  year={2025},
  organization={IEEE}
}