Variational Nested Dropout
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Cui, Yufei; Mao, Yu; Liu, Ziquan et al.
Detail(s)
Original language | English |
---|---|
Pages (from-to) | 10519-10534 |
Journal / Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence |
Volume | 45 |
Issue number | 8 |
Online published | 20 Feb 2023 |
Publication status | Published - Aug 2023 |
Link(s)
DOI | DOI |
---|---|
Link to Scopus | https://www.scopus.com/record/display.uri?eid=2-s2.0-85149364694&origin=recordpage |
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(1d5d9551-73de-46ad-a98c-b961e0e012e4).html |
Abstract
Nested dropout is a variant of the dropout operation that can order network parameters or features according to a pre-defined importance during training. It has been explored for two purposes. (i) Constructing nested nets [11], [10]: neural networks whose architectures can be adjusted instantly at test time, e.g., to meet computational constraints. Nested dropout implicitly ranks the network parameters, generating a set of sub-networks such that any smaller sub-network forms the basis of a larger one. (ii) Learning ordered representations [48]: nested dropout applied to the latent representation of a generative model (e.g., an auto-encoder) ranks the features, enforcing an explicit order over the dimensions of the dense representation. However, in both cases the dropout rate is fixed as a hyper-parameter throughout training. For nested nets, when network parameters are removed, the performance decays along a human-specified trajectory rather than one learned from data. For generative models, the importance of features is specified as a constant vector, restricting the flexibility of representation learning. To address these problems, we focus on the probabilistic counterpart of nested dropout. We propose a variational nested dropout (VND) operation that draws samples of multi-dimensional ordered masks at a low cost and provides useful gradients to the parameters of nested dropout. Based on this approach, we design a Bayesian nested neural network that learns the order knowledge of the parameter distributions. We further exploit VND under different generative models to learn ordered latent distributions. In experiments, we show that the proposed approach outperforms nested networks in terms of accuracy, calibration, and out-of-domain detection on classification tasks. It also outperforms related generative models on data generation tasks. © 2023 IEEE.
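As background to the abstract, a minimal sketch of the ordered masking that nested dropout performs may help. This is an illustration, not the paper's implementation: it uses PyTorch, draws the truncation index from a geometric distribution (a common choice in prior nested dropout work), and the names `nested_dropout_mask`, `num_units`, and `p` are assumed for the example.

```python
import torch

def nested_dropout_mask(num_units: int, p: float = 0.3) -> torch.Tensor:
    """Sample a hard ordered (nested) dropout mask.

    A truncation index b is drawn from a geometric distribution;
    units 0..b are kept and every later unit is dropped, so any
    smaller sub-representation is a prefix of a larger one.
    """
    b = int(torch.distributions.Geometric(probs=torch.tensor(p)).sample())
    b = min(b, num_units - 1)          # clip to the representation size
    mask = torch.zeros(num_units)
    mask[: b + 1] = 1.0                # keep the first b + 1 ordered units
    return mask

h = torch.randn(8)                     # toy latent representation
h_masked = h * nested_dropout_mask(8)  # kept features form a prefix
```

The hard mask above is not differentiable with respect to the dropout parameter, which is the problem VND addresses. One plausible relaxation, sketched under the same assumptions, reparameterizes the truncation index with a Gumbel-softmax sample and converts the soft one-hot vector into a monotone "downhill" mask via a reversed cumulative sum; `soft_ordered_mask` and `tau` are illustrative names, and this construction is not necessarily the paper's exact VND operation.

```python
import torch
import torch.nn.functional as F

def soft_ordered_mask(logits: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Draw a differentiable ordered mask (illustrative relaxation).

    A relaxed one-hot over the truncation index is sampled with
    Gumbel-softmax; a reversed cumulative sum turns it into a soft
    monotone mask m[0] >= m[1] >= ... >= m[D-1], so gradients can
    flow back to learnable per-dimension logits.
    """
    one_hot = F.gumbel_softmax(logits, tau=tau, hard=False)
    return torch.flip(torch.cumsum(torch.flip(one_hot, dims=[-1]), dim=-1),
                      dims=[-1])

logits = torch.zeros(8, requires_grad=True)  # learnable truncation logits
mask = soft_ordered_mask(logits)
mask.sum().backward()                        # gradients reach the logits
```

Because the mask is an ordinary differentiable function of the logits, the importance over dimensions can be learned from data rather than fixed as a constant vector, which is the flexibility the abstract highlights.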
Research Area(s)
- Bayes methods, Bayesian Neural Network, Computational modeling, Costs, Dropout, Indexes, Model Compression, Representation learning, Slimmable Neural Network, Training, Uncertainty, Uncertainty Estimation, Variational Autoencoder
Bibliographic Note
Research Unit(s) information for this publication is provided by the author(s) concerned.
Citation Format(s)
Variational Nested Dropout. / Cui, Yufei; Mao, Yu; Liu, Ziquan et al.
In: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 45, No. 8, 08.2023, p. 10519-10534.
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review