Integrating Security Primitives in Machine Learning Hardware for Trusted AI Systems

Authors

  • Zunaira Rafaqat, Chenab Institute of Information Technology
  • Areeba Sohail, Chenab Institute of Information Technology

Keywords:

Machine Learning Hardware, Security Primitives, Trusted AI, Hardware Security, Side-Channel Attacks, Trusted Execution Environment

Abstract

As machine learning (ML) systems become deeply embedded in critical infrastructures, from healthcare diagnostics to financial risk assessment and autonomous vehicles, the trustworthiness of the underlying hardware emerges as a decisive factor in ensuring system integrity. Conventional ML hardware accelerators focus heavily on performance and energy efficiency but often overlook the fundamental role of hardware security. This oversight exposes AI systems to vulnerabilities such as data poisoning, model theft, side-channel leakage, and hardware Trojans. This paper explores the integration of security primitives, including cryptographic modules, physically unclonable functions (PUFs), trusted execution environments (TEEs), and secure boot mechanisms, directly into ML hardware design to build trusted AI systems. Detailed analysis and experiments on a hardware-accelerated ML prototype enhanced with these primitives indicate significant improvements in resistance to adversarial interference and unauthorized access, while maintaining competitive performance metrics. The research highlights how embedding security primitives at the hardware level can shift the paradigm from performance-driven AI hardware toward resilient and trustworthy AI infrastructures.

Published

2024-04-24