An Efficient FPGA-based Depthwise Separable Convolutional Neural Network Accelerator with Hardware Pruning
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Related Research Unit(s)
Detail(s)
Original language | English |
---|---|
Article number | 15 |
Journal / Publication | ACM Transactions on Reconfigurable Technology and Systems |
Volume | 17 |
Issue number | 1 |
Online published | 12 Feb 2024 |
Publication status | Published - Mar 2024 |
Link(s)
DOI | DOI |
---|---|
Attachment(s) | Documents: Publisher's Copyright Statement |
Link to Scopus | https://www.scopus.com/record/display.uri?eid=2-s2.0-85197777088&origin=recordpage |
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(95cafd33-cff7-43ad-8747-97742d96b9c6).html |
Abstract
Convolutional neural networks (CNNs) have been widely deployed in computer vision tasks. However, the computation- and resource-intensive characteristics of CNNs pose obstacles to their application on embedded systems. This article proposes an efficient inference accelerator on Field Programmable Gate Array (FPGA) for CNNs with depthwise separable convolutions. To improve the accelerator efficiency, we make four contributions: (1) an efficient convolution engine with multiple strategies for exploiting parallelism and a configurable adder tree are designed to support three types of convolution operations; (2) a dedicated architecture combined with input buffers is designed for the bottleneck network structure to reduce data transmission time; (3) a hardware padding scheme to eliminate invalid padding operations is proposed; and (4) a hardware-assisted pruning method is developed to support online tradeoff between model accuracy and power consumption. Experimental results show that for MobileNetV2 the accelerator achieves 10× and 6× energy efficiency improvements over the CPU and GPU implementations, respectively, and delivers 302.3 frames per second and 181.8 GOPS, the best performance among several existing single-engine accelerators on FPGAs. The proposed hardware-assisted pruning method can effectively reduce power consumption by 59.7% with an accuracy loss within 5%. © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
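For readers unfamiliar with the operation at the core of this accelerator, the following is a minimal NumPy sketch (not the paper's hardware engine) of a depthwise separable convolution: a per-channel K×K depthwise stage followed by a 1×1 pointwise stage. The MAC-count comparison at the end illustrates the arithmetic savings over a standard convolution that motivate accelerators like this one; all function and variable names here are illustrative.

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_kernels):
    """Depthwise separable convolution, 'valid' padding, stride 1.

    x          : input feature map, shape (H, W, C_in)
    dw_kernels : one K x K filter per input channel, shape (K, K, C_in)
    pw_kernels : 1 x 1 pointwise weights, shape (C_in, C_out)
    """
    H, W, C_in = x.shape
    K = dw_kernels.shape[0]
    Ho, Wo = H - K + 1, W - K + 1
    # Depthwise stage: each input channel is filtered independently.
    dw_out = np.zeros((Ho, Wo, C_in))
    for c in range(C_in):
        for i in range(Ho):
            for j in range(Wo):
                dw_out[i, j, c] = np.sum(x[i:i+K, j:j+K, c] * dw_kernels[:, :, c])
    # Pointwise stage: a 1x1 convolution mixes information across channels.
    return dw_out @ pw_kernels  # shape (Ho, Wo, C_out)

# Multiply-accumulate counts per output position show the savings:
# standard conv needs K*K*C_in*C_out MACs, the separable form only
# K*K*C_in (depthwise) + C_in*C_out (pointwise).
K, C_in, C_out = 3, 32, 64
standard_macs = K * K * C_in * C_out           # 18432
separable_macs = K * K * C_in + C_in * C_out   # 2336
```

With K=3, C_in=32, C_out=64 the separable form uses roughly 8× fewer MACs, which is the kind of reduction MobileNet-style networks exploit.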
Research Area(s)
- bottleneck, CNN accelerator, depthwise-separable convolution, model compression
Citation Format(s)
An Efficient FPGA-based Depthwise Separable Convolutional Neural Network Accelerator with Hardware Pruning. / LIU, Zhengyan; LIU, Qiang; YAN, Shun et al.
In: ACM Transactions on Reconfigurable Technology and Systems, Vol. 17, No. 1, 15, 03.2024.