Classification with Deep Neural Networks and Logistic Loss
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Zhang, Zihan; Shi, Lei; Zhou, Ding-Xuan
Related Research Unit(s)
Detail(s)
Original language | English
---|---
Article number | 125
Journal / Publication | Journal of Machine Learning Research
Volume | 25
Online published | Apr 2024
Publication status | Published - 2024
Link(s)
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(a4d41f07-7b87-40bd-a6fd-d82d9a256c0d).html |
Abstract
Deep neural networks (DNNs) trained with the logistic loss (also known as the cross-entropy loss) have made impressive advancements in various binary classification tasks. Despite the considerable success in practice, generalization analysis for binary classification with deep neural networks and the logistic loss remains scarce. The unboundedness of the target function for the logistic loss in binary classification is the main obstacle to deriving satisfactory generalization bounds. In this paper, we aim to fill this gap by developing a novel theoretical analysis and using it to establish tight generalization bounds for training fully connected ReLU DNNs with the logistic loss in binary classification. Our generalization analysis is based on an elegant oracle-type inequality which enables us to deal with the boundedness restriction of the target function. Using this oracle-type inequality, we establish generalization bounds for fully connected ReLU DNN classifiers f̂_n^FNN trained by empirical logistic risk minimization with respect to i.i.d. samples of size n, which lead to sharp rates of convergence as n → ∞. In particular, we obtain optimal convergence rates for f̂_n^FNN (up to some logarithmic factor) requiring only the Hölder smoothness of the conditional class probability η of the data. Moreover, we consider a compositional assumption that requires η to be the composition of several vector-valued multivariate functions of which each component function is either a maximum value function or a Hölder smooth function depending only on a small number of its input variables. Under this assumption, we can even derive optimal convergence rates for f̂_n^FNN (up to some logarithmic factor) which are independent of the input dimension of the data. This result explains why in practice DNN classifiers can overcome the curse of dimensionality and perform well in high-dimensional classification problems. Furthermore, we establish dimension-free rates of convergence under other circumstances, such as when the decision boundary is piecewise smooth and the input data are bounded away from it. Besides the novel oracle-type inequality, the sharp convergence rates presented in our paper are also due to a tight error bound for approximating the natural logarithm function near zero (where it is unbounded) by ReLU DNNs. In addition, we justify our claims for the optimality of rates by proving corresponding minimax lower bounds. All these results are new in the literature and will deepen our theoretical understanding of classification with deep neural networks. © 2024 Zihan Zhang, Lei Shi and Ding-Xuan Zhou.
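For readers unfamiliar with the setup, the abstract refers to the logistic loss φ(t) = log(1 + e^(−t)), whose population minimizer f*(x) = log(η(x)/(1 − η(x))) is unbounded as η(x) approaches 0 or 1; this is the boundedness issue the oracle-type inequality addresses. The sketch below is a minimal, purely illustrative example of empirical logistic risk minimization with a fully connected ReLU network, assuming PyTorch and synthetic data; the architecture, step size, and sample size are arbitrary placeholders and are not the construction analyzed in the paper.

```python
# Illustrative sketch only: empirical logistic risk minimization with a
# fully connected ReLU network (assumes PyTorch; hyperparameters are placeholders).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic binary classification data with labels y in {0, 1}.
n, d = 512, 8
X = torch.randn(n, d)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float().unsqueeze(1)

# Fully connected ReLU network producing a real-valued score f(x).
model = nn.Sequential(
    nn.Linear(d, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

# BCEWithLogitsLoss applies the logistic (cross-entropy) loss to the raw score,
# i.e. log(1 + exp(-f(x))) when y = 1 and log(1 + exp(f(x))) when y = 0.
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(200):
    opt.zero_grad()
    risk = loss_fn(model(X), y)   # empirical logistic risk over the n samples
    risk.backward()
    opt.step()

# The induced classifier predicts sign(f(x)); report training accuracy.
with torch.no_grad():
    acc = ((model(X) > 0).float() == y).float().mean()
print(f"empirical risk: {risk.item():.4f}, training accuracy: {acc.item():.3f}")
```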
Research Area(s)
- deep learning, deep neural networks, binary classification, logistic loss, generalization analysis
Citation Format(s)
Classification with Deep Neural Networks and Logistic Loss. / Zhang, Zihan; Shi, Lei; Zhou, Ding-Xuan.
In: Journal of Machine Learning Research, Vol. 25, 125, 2024.
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review