Energy-Efficient Circuit Design for Low-Power Machine Learning Accelerators

Authors

  • Noman Mazher, University of Gujrat
  • Areej Mustafa, University of Gujrat

Keywords:

Energy-efficient circuits, low-power design, machine learning accelerators, approximate computing, memory optimization, voltage scaling

Abstract

The increasing demand for machine learning (ML) applications across mobile, embedded, and edge computing devices has highlighted the importance of energy-efficient circuit design for low-power accelerators. Traditional machine learning accelerators, while powerful, often suffer from high power consumption and thermal inefficiencies that limit their adoption in energy-constrained environments. This paper explores strategies for designing energy-efficient circuits tailored to low-power ML accelerators, focusing on reducing dynamic and static power consumption while maintaining computational throughput. Through a combination of voltage scaling, approximate computing, memory optimization, and circuit-level techniques, this research investigates how energy efficiency can be achieved without significantly compromising model accuracy. Experimental evaluations conducted on a custom-designed hardware prototype with simulated workloads from convolutional neural networks (CNNs) and transformer models demonstrate up to a 45% reduction in energy consumption relative to baseline accelerator designs. The findings provide insights into balancing efficiency, scalability, and accuracy in next-generation machine learning hardware.
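For context on the voltage-scaling claim in the abstract, the standard first-order CMOS power model (a textbook relation, not an equation taken from this paper) shows why lowering the supply voltage yields roughly quadratic dynamic-energy savings while also reducing leakage power:

```latex
% First-order CMOS power model (textbook relation; symbols are illustrative,
% not notation defined by the paper).
% Dynamic power scales quadratically with the supply voltage V_dd;
% static power scales with V_dd through the leakage current I_leak.
\[
P_{\mathrm{dyn}} = \alpha \, C_{\mathrm{eff}} \, V_{dd}^{2} \, f ,
\qquad
P_{\mathrm{static}} = V_{dd} \, I_{\mathrm{leak}} ,
\qquad
P_{\mathrm{total}} = P_{\mathrm{dyn}} + P_{\mathrm{static}}
\]
```

Here \(\alpha\) is the switching activity factor, \(C_{\mathrm{eff}}\) the effective switched capacitance, and \(f\) the clock frequency; the quadratic dependence on \(V_{dd}\) is what makes voltage scaling such a strong lever for accelerator energy efficiency, at the cost of reduced maximum operating frequency.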

Published

2024-01-04