Mixed-Signal AI Hardware Accelerators

Artificial neural networks (ANNs) have enabled major advances in tasks such as image recognition, speech processing, and natural language understanding. However, their hardware implementation is often dominated by the cost of data movement between memory and processing units. In conventional von Neumann architectures, repeated transfers of weights and activations between memory and compute engines result in substantial energy consumption and latency, limiting the efficiency of AI systems, particularly for edge devices with strict power and resource constraints.

Analog and mixed-signal (AMS) computing provides an alternative approach by performing computation directly in the physical domain using device and circuit dynamics. By leveraging the intrinsic properties of electronic devices, AMS architectures can execute operations such as multiply-and-accumulate with significantly improved energy efficiency compared to purely digital implementations.

Our research explores device-to-system approaches for energy-efficient AI hardware, with a particular focus on time-domain computing and in-memory computing architectures using emerging non-volatile memory technologies such as ferroelectric FETs. We design and prototype mixed-signal AI accelerators that combine novel devices, circuit techniques, and architecture-level innovations to enable scalable, low-power AI systems for edge applications.
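The time-domain idea above can be illustrated with a toy software model: an input bit activates a memory cell, and each activated cell adds a weight-dependent delay to a propagating pulse, so the total pulse delay encodes a dot product. All names and the 550 ps unit delay below are illustrative assumptions (the unit delay is borrowed from the macro described in the publications), not a description of the actual silicon implementation.

```python
# Toy behavioral model of a time-domain multiply-and-accumulate (MAC).
# Hypothetical: each cell contributes (input_bit * weight_level) unit
# delays to a pulse traveling through a delay chain, so the chain's
# total propagation delay encodes the dot product of the input vector
# and the stored multilevel weights.

DELAY_STEP_PS = 550  # illustrative unit delay per weight level, in picoseconds

def time_domain_mac(inputs, weights):
    """Return the total pulse delay (ps) encoding sum(x_i * w_i)."""
    assert len(inputs) == len(weights)
    total_delay = 0
    for x, w in zip(inputs, weights):
        # A cell adds delay only when its input bit activates it;
        # a multilevel weight selects how many unit delays it adds.
        total_delay += x * w * DELAY_STEP_PS
    return total_delay

def decode(delay_ps):
    """Recover the digital dot product by quantizing the delay."""
    return delay_ps // DELAY_STEP_PS

inputs = [1, 0, 1, 1]        # binary input activations
weights = [2, 3, 1, 0]       # multilevel stored weights
delay = time_domain_mac(inputs, weights)
print(delay, decode(delay))  # 1650 ps -> dot product 3
```

In hardware, the accumulation happens physically in the delay chain rather than in an adder, which is the source of the energy savings; this sketch only mimics that behavior arithmetically.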

Relevant Publications:
  1. J. Mattar, M. M. Dahan, S. Dünkel, H. Mulaosmanovic, G. Beernik, S. Beyer, E. Yalon, and N. Wainstein, “A Reconfigurable Time-Domain In-Memory Computing Macro using FeFET-based CAM with Multilevel Delay Calibration in 28-nm CMOS,” IEEE Transactions on Circuits and Systems I: Regular Papers, 2026 (in press). 
  2. J. Mattar, M. M. Dahan, S. Dünkel, H. Mulaosmanovic, G. Beernik, S. Beyer, E. Yalon, and N. Wainstein, “A FeFET CAM-Based Time-Domain In-Memory Computing Macro with 550 ps Delay Step in 28 nm CMOS,” IEEE Nonvolatile Memory Technology Symposium (NVMTS), September 2025. Best Poster Award 🏆.
  3. J. Mattar, M. M. Dahan, S. Dünkel, H. Mulaosmanovic, S. Beyer, E. Yalon, and N. Wainstein, “FeFET-Based Time-Domain In-Memory Computing Macro with Tunable Delay Calibration,” IEEE Device Research Conference (DRC), June 2025. Nominated for Best Student Oral Presentation Award. 
  4. K. Stern, N. Wainstein, Y. Keller, C. Neumann, E. Pop, S. Kvatinsky, and E. Yalon, “Sub-Nanosecond Pulses Enable Partial Reset for Analog Phase Change Memory,” IEEE Electron Device Letters, vol. 42, no. 9, pp. 1291-1294, September 2021. (Paper)
  5. L. Danial, E. Pikhay, E. Herbelin, N. Wainstein, V. Gupta, N. Wald, Y. Roizin, R. Daniel, and S. Kvatinsky, “Two-terminal floating-gate transistors with a low-power memristive operation mode for analogue neuromorphic computing,” Nature Electronics, vol. 2, pp. 596-605, December 2019. (Paper)