Analog/Mixed-Signal Computing for AI

Artificial Neural Networks (ANNs) have achieved remarkable success across various machine learning tasks, including natural language processing, speech recognition, and image classification. However, ANN hardware accelerators typically depend on highly parallel multiply-and-accumulate (MAC) operations, which generate substantial intermediate data. In conventional von Neumann architectures, frequent data transfers between processing elements and memory incur significant energy and latency overheads. These challenges are particularly pronounced in complex models with high bit precision, restricting their deployment on edge devices. A rough sketch of this MAC-dominated workload is given below.
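
As an illustration only (not part of the original text), the following minimal Python sketch treats a single fully connected layer as a batch of MAC operations and counts the MACs and weight traffic per inference; the layer dimensions are hypothetical.

```python
# Minimal sketch (illustrative, hypothetical layer sizes): a fully connected
# layer is dominated by multiply-and-accumulate (MAC) operations, and the
# weights and intermediate results must move between memory and the
# processing elements in a von Neumann machine.
import numpy as np

def fc_layer(x, W, b):
    """y[j] = sum_i W[j, i] * x[i] + b[j] -- one MAC per (i, j) pair."""
    return W @ x + b

in_dim, out_dim = 1024, 1024             # hypothetical layer dimensions
x = np.random.randn(in_dim).astype(np.float32)
W = np.random.randn(out_dim, in_dim).astype(np.float32)
b = np.zeros(out_dim, dtype=np.float32)

y = fc_layer(x, W, b)

macs = in_dim * out_dim                   # one multiply-accumulate per weight
weight_bytes = W.nbytes                   # data fetched from memory per inference
print(f"MACs per inference: {macs:,}")
print(f"Weight traffic:     {weight_bytes / 1e6:.1f} MB at 32-bit precision")
```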

Analog/mixed-signal (AMS) computing has emerged as a promising alternative for implementing ANNs. Analog computation offers higher energy efficiency than digital computation in low signal-to-noise ratio (SNR) scenarios. Furthermore, many workloads built on MAC operations tolerate relatively low precision, making AMS an attractive option for edge AI, as sketched below.
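
To make the precision argument concrete, the sketch below (an assumption-laden toy model, not a description of any specific AMS design) represents an analog MAC as an ideal dot product plus additive Gaussian noise and then quantizes the result; the noise level, vector length, and bit widths are illustrative.

```python
# Minimal sketch (assumptions: additive Gaussian noise as a stand-in for
# analog non-idealities, and a uniform output quantizer). The coarser the
# output quantization, the more analog noise the MAC result can absorb
# without changing the quantized value.
import numpy as np

rng = np.random.default_rng(0)

def analog_mac(w, x, noise_std):
    """Ideal dot product plus additive noise modeling the analog compute path."""
    return w @ x + rng.normal(0.0, noise_std)

def quantize(value, full_scale, bits):
    """Uniform quantizer over [-full_scale, +full_scale]."""
    levels = 2 ** bits
    step = 2 * full_scale / levels
    return np.clip(np.round(value / step), -levels // 2, levels // 2 - 1) * step

n = 256                                   # hypothetical vector length
w = rng.standard_normal(n) / np.sqrt(n)   # normalized weights
x = rng.standard_normal(n)

ideal = w @ x
noisy = analog_mac(w, x, noise_std=0.02)  # illustrative analog noise level

for bits in (4, 6, 8):
    same = quantize(ideal, 4.0, bits) == quantize(noisy, 4.0, bits)
    print(f"{bits}-bit output: noisy result matches ideal quantized value? {same}")
```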

This research aims to design, optimize, and experimentally demonstrate ANN hardware accelerators for energy-constrained edge applications, leveraging both CMOS and beyond-CMOS technologies.