Dooyong Koh, Ethan G. Rogers, Kshemal K. Gupte, Qiuyuan Wang, Brooke C. McGoldrick, Jungsoo Lee, Marc A. Baldo, Cheng Wang, and Luqiao Liu

DOI: https://doi.org/10.1103/s86j-5jcr

Abstract:

The rapidly growing demand for more efficient artificial intelligence (AI) hardware accelerators poses a pressing challenge. Crossbar arrays have been widely proposed as a promising in-memory computing (IMC) architecture, but conventional nonvolatile-memory-based crossbar arrays inherently require a large number of analog-to-digital converters (ADCs), leading to significant area and energy inefficiencies. Here, we demonstrate three-terminal stochastic magnetic tunnel junctions (sMTJs) operated by spin-orbit torque as alternative interfacial components between the analog and digital domains for next-generation AI accelerators. By harnessing the intrinsic analog-current-to-digital-signal conversion of sMTJs, we replace conventionally bulky, energy-hungry, and slow ADCs with compact, low-power, and rapid stochastic current digitizers. Furthermore, a partial-sum approach is leveraged to break down large matrix operations, preserving computational efficiency while achieving high accuracy on the Modified National Institute of Standards and Technology (MNIST) handwritten-digit dataset. This work offers promising results for realizing innovative IMC architectures leveraging edge devices integral to the future of AI hardware.
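The partial-sum idea mentioned above can be illustrated with a minimal Python/NumPy sketch: a large matrix-vector product is split into column tiles, each tile's partial result is produced separately (in hardware, by a crossbar subarray whose output current would be digitized by the sMTJ-based stochastic digitizer), and the digitized partial sums are accumulated in the digital domain. The function name, tile size, and matrix shapes below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def partial_sum_mvm(W, x, tile=64):
    """Compute W @ x as an accumulation of column-tiled partial sums.

    Each tile models one crossbar subarray; in the hardware described
    here, each partial output current would pass through an sMTJ-based
    stochastic digitizer before digital accumulation. This sketch keeps
    the arithmetic ideal (no quantization or device noise).
    """
    n = W.shape[1]
    acc = np.zeros(W.shape[0])
    for start in range(0, n, tile):
        # Partial matrix-vector product over one column tile.
        acc += W[:, start:start + tile] @ x[start:start + tile]
    return acc

# Example: a 10x784 weight matrix (e.g., an MNIST-sized output layer).
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 784))
x = rng.standard_normal(784)

# Tiled accumulation matches the full matrix-vector product.
assert np.allclose(partial_sum_mvm(W, x), W @ x)
```

Breaking the operation into tiles in this way bounds the dynamic range each digitizer must handle, which is what makes replacing a high-resolution ADC with a compact stochastic digitizer feasible.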