Abstract
The performance of a neural network (NN) for a given task is largely determined by the calibration of the network parameters. Yet, it has been shown that this calibration, also referred to as training, is generally NP-complete. This includes networks with binary weights, an important class of networks due to their practical hardware implementations. We therefore suggest an alternative approach to training binary NNs that utilizes a quantum superposition of weight configurations. We show that the quantum training converges, with high probability, towards the globally optimal set of network parameters. This resolves two prominent issues of classical training: (1) the vanishing gradient problem and (2) common convergence to sub-optimal network parameters. We prove that a solution is found after approximately 4n² log(n/δ) √Ñ calls to a comparing oracle, where δ represents a precision, n is the number of training inputs and Ñ is the number of weight configurations. We give the explicit algorithm and implement it in numerical simulations.
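The paper's comparing-oracle routine is not reproduced here. As a rough, hypothetical illustration of the Grover-type √Ñ scaling the abstract refers to, the sketch below simulates amplitude amplification over all Ñ = 8 weight configurations of a toy 3-weight binary perceptron. The training data, the perceptron model, and the simplified oracle (which directly marks the minimum-loss configurations instead of comparing pairs of configurations, as the paper's oracle does) are assumptions made for this example only.

```python
import itertools
import numpy as np

# Toy setup (hypothetical, not from the paper): a perceptron with 3 binary
# weights in {-1, +1}, so N_tilde = 2**3 = 8 candidate weight configurations.
X = np.array([[+1, +1, -1],
              [+1, -1, +1],
              [-1, +1, +1],
              [+1, +1, +1]])
w_true = np.array([+1, -1, +1])        # hidden teacher used only to label the toy data
y = np.sign(X @ w_true)

configs = np.array(list(itertools.product([-1, +1], repeat=3)))
N = len(configs)                       # N_tilde: number of weight configurations

def loss(w):
    """Number of misclassified training inputs for weight vector w."""
    return np.sum(np.sign(X @ w) != y)

losses = np.array([loss(w) for w in configs])
best = losses.min()

# Grover-style amplitude amplification: start in a uniform superposition over
# all weight configurations and amplify those the (simplified) oracle marks as
# globally optimal, i.e. whose loss equals the minimum over all configurations.
state = np.ones(N) / np.sqrt(N)
oracle_sign = np.where(losses == best, -1.0, 1.0)

n_marked = np.sum(losses == best)
iterations = int(np.floor(np.pi / 4 * np.sqrt(N / n_marked)))

for _ in range(iterations):
    state = oracle_sign * state          # oracle: flip the phase of marked configs
    state = 2 * state.mean() - state     # diffusion: inversion about the mean

probs = state**2
print("optimal configs:\n", configs[losses == best])
print("success probability after", iterations, "Grover iterations:",
      probs[losses == best].sum().round(3))
```

For this toy instance the single optimal configuration is found with probability ≈ 0.95 after only ⌊(π/4)√8⌋ = 2 amplification rounds, which mirrors the √Ñ dependence in the paper's oracle-call bound; the additional 4n² log(n/δ) factor in the abstract accounts for the repeated comparisons over the n training inputs, which this simplified sketch does not model.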
| Field | Value |
|---|---|
| Original language | English |
| Article number | 063013 |
| Journal | New Journal of Physics |
| Volume | 23 |
| Issue number | 6 |
| Online published | 7 Jun 2021 |
| DOIs | |
| Publication status | Published - Jun 2021 |
| Externally published | Yes |
Research Keywords
- binary neural nets
- quantum computation
- quantum neural nets
- unitary training
Publisher's Copyright Statement
- This full text is made available under CC-BY 4.0. https://creativecommons.org/licenses/by/4.0/