Quantum speed-up in global optimization of binary neural nets

Yidong Liao, Daniel Ebler, Feiyang Liu, Oscar Dahlsten*

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review


Abstract

The performance of a neural network (NN) for a given task is largely determined by the initial calibration of the network parameters. Yet it has been shown that this calibration, also referred to as training, is in general NP-complete. This includes networks with binary weights, an important class of networks due to their practical hardware implementations. We therefore suggest an alternative approach to training binary NNs. It utilizes a quantum superposition of weight configurations. We show that the quantum training guarantees, with high probability, convergence towards the globally optimal set of network parameters. This resolves two prominent issues of classical training: (1) the vanishing-gradient problem and (2) the common convergence to sub-optimal network parameters. We prove that a solution is found after approximately 4n log(n/δ) √Ñ calls to a comparing oracle, where δ is a precision parameter, n is the number of training inputs and Ñ is the number of weight configurations. We give the explicit algorithm and implement it in numerical simulations.
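
To make the symbols in the bound concrete, the following is a minimal classical sketch, not the paper's quantum procedure: it brute-forces a toy single-layer binary network over all Ñ weight configurations and, for comparison, evaluates the oracle-call bound 4n log(n/δ) √Ñ quoted above. The network shape, data, loss and parameter values below are hypothetical choices made purely for illustration.

```python
# Illustrative classical sketch only (not the authors' quantum algorithm):
# exhaustive search over all binary weight configurations of a toy
# sign-activation perceptron, plus the abstract's oracle-call bound
# ~ 4 n log(n/delta) sqrt(N~) evaluated for the same problem size.
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 8       # n: number of training inputs
n_features = 4     # toy input dimension (number of binary weights)
delta = 0.1        # precision parameter delta from the abstract

# Hypothetical +/-1 training data for a binary classification toy task.
X = rng.choice([-1, 1], size=(n_inputs, n_features))
y = rng.choice([-1, 1], size=n_inputs)

def loss(weights):
    """Number of misclassified training inputs for a sign-activation perceptron."""
    preds = np.sign(X @ np.array(weights))
    preds[preds == 0] = 1
    return int(np.sum(preds != y))

# Exhaustive global optimisation: N~ = 2**n_features weight configurations.
configs = list(itertools.product([-1, 1], repeat=n_features))
N_tilde = len(configs)
best = min(configs, key=loss)
print("global optimum:", best, "with loss", loss(best))
print("classical cost ~", N_tilde, "loss evaluations")

# The abstract's quoted bound on comparing-oracle calls, for illustration.
quantum_calls = 4 * n_inputs * math.log(n_inputs / delta) * math.sqrt(N_tilde)
print("quoted bound 4 n log(n/δ) √Ñ ≈", round(quantum_calls))
```

For such a tiny Ñ the constant factors dominate and the quoted bound exceeds the exhaustive count; the √Ñ scaling only pays off as the number of binary weights, and hence Ñ = 2^(number of weights), grows.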
Original language: English
Article number: 063013
Journal: New Journal of Physics
Volume: 23
Issue number: 6
Online published: 7 Jun 2021
DOIs
Publication status: Published - Jun 2021
Externally published: Yes

Research Keywords

  • binary neural nets
  • quantum computation
  • quantum neural nets
  • unitary training

Publisher's Copyright Statement

  • This full text is made available under CC-BY 4.0. https://creativecommons.org/licenses/by/4.0/

