Abstract
Optical neural networks (ONNs) are emerging as attractive proposals for machine-learning applications. However, the stability of ONNs decreases with the circuit depth, limiting the scalability of ONNs for practical use. Here we demonstrate how to compress the circuit depth to scale only logarithmically with the dimension of the data, leading to an exponential gain in noise robustness. Our low-depth (LD)-ONN is based on an architecture, called Optical CompuTing Of dot-Product UnitS (OCTOPUS), which can also be applied individually as a linear perceptron for solving classification problems. We present both numerical and theoretical evidence showing that LD-ONN can exhibit a significant improvement in robustness, compared with previous ONN proposals based on singular-value decomposition. © 2021 The Author(s)
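The abstract's depth claim can be illustrated with a toy sketch (not the authors' optical implementation): a dot product over N inputs computed by serial accumulation takes N sequential stages, whereas a binary-tree reduction needs only ceil(log2 N) stages, which is the kind of logarithmic depth scaling the paper targets. The function names below are illustrative, not from the paper.

```python
import math

def serial_dot(x, w):
    # Serial accumulation: N sequential add stages (depth ~ N).
    acc = 0.0
    for xi, wi in zip(x, w):
        acc += xi * wi
    return acc

def tree_dot(x, w):
    # Binary-tree reduction: products are summed pairwise, so the
    # number of sequential add stages is ceil(log2 N).
    vals = [xi * wi for xi, wi in zip(x, w)]
    depth = 0
    while len(vals) > 1:
        vals = [sum(vals[i:i + 2]) for i in range(0, len(vals), 2)]
        depth += 1
    return vals[0], depth

x = [0.5, -1.0, 2.0, 0.25, 1.5, -0.5, 3.0, 1.0]
w = [1.0, 2.0, -0.5, 4.0, 0.0, 2.0, 1.0, -1.0]
value, depth = tree_dot(x, w)
print(value, depth)  # same result as serial_dot, but depth = 3 for N = 8
```

Because each stage in a physical circuit accumulates noise, reducing the number of sequential stages from N to log N is the source of the exponential robustness gain claimed above.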
| Original language | English |
|---|---|
| Article number | 100002 |
| Journal | Chip |
| Volume | 1 |
| Issue number | 1 |
| Online published | 31 Jan 2022 |
| DOIs | |
| Publication status | Published - Mar 2022 |
Research Keywords
- machine learning
- optical neural networks
- photonic chip
Publisher's Copyright Statement
- This full text is made available under CC-BY-NC-ND 4.0. https://creativecommons.org/licenses/by-nc-nd/4.0/