Artwork by Sampson Wilcox
Chao Luan, Ronald Davis III, Zaijun Chen, Dirk Englund & Ryan Hamerly
DOI: 10.1038/s41467-026-68452-x
Abstract:
The ever-increasing demand for data calls for advances in high-speed, energy-efficient computing hardware. Analog optical neural network (ONN) processors have emerged as a promising solution, offering advantages in bandwidth and energy consumption. However, existing ONN processors exhibit limited computational parallelism, and while certain architectures achieve high parallelism, they face serious roadblocks to large-scale implementation. Here, we introduce a spatial-wavelength-temporal hyper-multiplexed ONN processor based on parallel diffractive beam routing. The architecture supports three-dimensional data, high O(N³) computing parallelism, and is feasible for large-scale implementation. We demonstrate 16 × 16 parallel diffractive beam routing, enabling a large-scale (16 × 16-by-16 × 16), high-parallelism (4096 multiply-and-accumulates per shot (MACs/shot)), high-speed (2 GSa/s), single-shot matrix-matrix multiplication (MMM) optical tensor processor. It accelerates convolutional neural networks (CNNs) and deep neural networks (DNNs) through parallel matrix multiplication. We demonstrate benchmark image recognition using a CNN followed by a fully connected DNN in the optical domain. The network operates at an ultra-low optical energy of ≈20 attojoules (aJ)/MAC with 96.4% classification accuracy. The ONN system supports broad spectral and spatial bandwidths and is amenable to large-scale scaling, paving the way for highly efficient large-scale optical computing for next-generation deep learning.
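To make the parallelism figure concrete: a single-shot matrix-matrix multiplication of two N × N matrices entails N³ multiply-and-accumulate operations, which for N = 16 gives the 4096 MACs/shot quoted above. The following sketch (illustrative only, not the authors' code; the energy figure is the ≈20 aJ/MAC reported in the abstract) checks this arithmetic with an electronic reference computation:

```python
import numpy as np

# An N x N by N x N matrix-matrix multiplication (MMM) performs
# N^3 multiply-and-accumulate (MAC) operations; the optical tensor
# processor described above executes all of them in a single shot.
N = 16
A = np.random.rand(N, N)
B = np.random.rand(N, N)
C = A @ B  # electronic reference for the operation done optically per shot

macs_per_shot = N ** 3              # 16^3 = 4096 MACs/shot
energy_per_mac_aJ = 20              # reported optical energy, ~20 aJ/MAC
energy_per_shot_aJ = macs_per_shot * energy_per_mac_aJ  # ~82 fJ per shot

print(macs_per_shot)        # 4096
print(energy_per_shot_aJ)   # 81920 aJ
```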

