Robust object detection in extreme construction conditions
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Ding, Yuexiong; Zhang, Ming; Pan, Jia et al.
Detail(s)
| Field | Value |
| --- | --- |
| Original language | English |
| Article number | 105487 |
| Journal / Publication | Automation in Construction |
| Volume | 165 |
| Online published | 14 Jun 2024 |
| Publication status | Published - Sept 2024 |
Abstract
Current construction object detection models are vulnerable in complex conditions because they are trained on conventional data and lack robustness in extreme situations; the scarcity of extreme data with relevant annotations worsens this problem. A new end-to-end unified image adaptation You-Only-Look-Once-v5 (UIA-YOLOv5) model is presented for robust object detection in five extreme conditions: low light, intense light, fog, dust, and rain. The UIA-YOLOv5 adaptively enhances the input image so that its content is visually clear and then feeds the enhanced image to YOLOv5 for object detection. To reduce domain shift, sufficient extreme images are synthesized via neural style transfer (NST) and mixed with conventional data for model training. An extreme construction dataset (ExtCon), containing 506 images labeled with 13 object categories, is constructed for real-world evaluation. Results show that UIA-YOLOv5 matches the performance of YOLOv5 on conventional data while being more robust to extreme data, with an 8.21% mAP@0.5 improvement. © 2024 Published by Elsevier B.V.
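To make the cascaded design described in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: a hypothetical image-adaptation module produces an enhanced image that is then passed unchanged to a detector such as YOLOv5. The module structure, class names, and parameters are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class ImageAdaptation(nn.Module):
    """Hypothetical lightweight enhancement head (an assumption, not the paper's
    exact design): predicts a per-pixel residual so that dark, overexposed,
    foggy, dusty, or rainy inputs become visually clearer before detection."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Enhanced image = input + learned residual, clamped to the valid [0, 1] range.
        return torch.clamp(x + self.net(x), 0.0, 1.0)


class UnifiedAdaptationDetector(nn.Module):
    """End-to-end cascade: adaptation module -> detector, trainable jointly."""

    def __init__(self, detector: nn.Module):
        super().__init__()
        self.adapt = ImageAdaptation()
        self.detector = detector  # e.g. a YOLOv5 model; any nn.Module works here

    def forward(self, x):
        return self.detector(self.adapt(x))


if __name__ == "__main__":
    # Smoke test with a stand-in detector; in practice the detector would be YOLOv5.
    model = UnifiedAdaptationDetector(detector=nn.Identity())
    dummy = torch.rand(1, 3, 640, 640)  # one 640x640 RGB image in [0, 1]
    print(model(dummy).shape)  # torch.Size([1, 3, 640, 640])
```

Because the enhancement module sits in front of the detector and both are differentiable, the whole pipeline can be trained end to end on the mixed conventional and NST-synthesized extreme data, which is the unified training idea the abstract describes.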
Research Area(s)
- Construction industry, Robust object detection, Extreme conditions, Image adaptation, Neural style transfer, Extreme construction dataset
Citation Format(s)
Robust object detection in extreme construction conditions. / Ding, Yuexiong; Zhang, Ming; Pan, Jia et al.
In: Automation in Construction, Vol. 165, 105487, 09.2024.
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review