Fully Nested Neural Network for Adaptive Compression and Quantization
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
Cui, Yufei; Liu, Ziquan; Yao, Wuguannan et al.
Detail(s)
| Field | Value |
| --- | --- |
| Original language | English |
| Title of host publication | Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence |
| Editors | Christian Bessiere |
| Publisher | International Joint Conferences on Artificial Intelligence |
| Pages | 2080-2087 |
| ISBN (electronic) | 978-0-9992411-6-5 |
| Publication status | Published - Jan 2021 |
Publication series
| Field | Value |
| --- | --- |
| Name | IJCAI International Joint Conference on Artificial Intelligence |
| Volume | 2021-January |
| ISSN (Print) | 1045-0823 |
Conference
| Field | Value |
| --- | --- |
| Title | 29th International Joint Conference on Artificial Intelligence (IJCAI 2020) |
| Location | Virtual |
| Place | Japan |
| City | Yokohama |
| Period | 7 - 15 January 2021 |
Abstract
Neural network compression and quantization are important for fitting state-of-the-art models within the computational, memory, and power constraints of mobile devices and embedded hardware. Recent approaches to model compression/quantization rely on reinforcement learning or search methods to quantize a neural network for a specific hardware platform. However, these methods must be rerun to compress/quantize the same base network for each new hardware setup. In this work, we propose a fully nested neural network (FN3) that is trained only once to produce a nested set of compressed/quantized models, each optimal under a different resource constraint. Specifically, we exploit the additive structure shared by different levels of building blocks in a neural network and propose an ordered dropout (ODO) operation that imposes a ranking on those blocks. Given a trained FN3, a fast heuristic search algorithm is run offline to find the removal of components that maximizes accuracy under each constraint. Compared with related work on adaptive neural networks designed only for channels or bits, the proposed approach applies to building blocks at every level (bits, neurons, channels, residual paths, and layers). Empirical results validate the strong practical performance of the proposed approach.
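To make the ordered dropout (ODO) idea concrete, the PyTorch sketch below ranks the output units of a single linear layer and, during training, zeroes out all units above a randomly sampled cutoff, so lower-ranked units learn to work without higher-ranked ones. The class name `OrderedDropoutLinear`, the uniform cutoff distribution, and the per-unit granularity are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of ordered dropout over the output units of one layer.
# Hypothetical names and sampling scheme; the paper applies the same idea
# at several levels (bits, neurons, channels, residual paths, layers).
import torch
import torch.nn as nn

class OrderedDropoutLinear(nn.Module):
    """Linear layer whose output units are ranked; a sampled prefix survives."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.out_features = out_features

    def forward(self, x: torch.Tensor, keep: int | None = None) -> torch.Tensor:
        y = self.linear(x)
        if self.training and keep is None:
            # Sample how many ranked units to keep (uniform here; the
            # actual cutoff distribution is a design choice).
            keep = torch.randint(1, self.out_features + 1, (1,)).item()
        if keep is not None and keep < self.out_features:
            mask = torch.zeros(self.out_features, device=y.device)
            mask[:keep] = 1.0          # units 0..keep-1 survive
            y = y * mask               # zero out units ranked keep and above
        return y

# At deployment, choosing `keep` trades accuracy for cost without retraining:
layer = OrderedDropoutLinear(16, 8).eval()
x = torch.randn(4, 16)
small = layer(x, keep=3)   # compact sub-model: only the top-3 ranked units
full = layer(x)            # full model in eval mode (no masking)
```

In this reading, the offline heuristic search the abstract describes would amount to choosing a `keep` value per layer (and per building-block level) so that accuracy is maximized under a given resource budget, since every prefix of the ranking is itself a valid sub-model.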
Citation Format(s)
Fully Nested Neural Network for Adaptive Compression and Quantization. / Cui, Yufei; Liu, Ziquan; Yao, Wuguannan et al.
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence. ed. / Christian Bessiere. International Joint Conferences on Artificial Intelligence, 2021. p. 2080-2087 (IJCAI International Joint Conference on Artificial Intelligence; Vol. 2021-January).