Robust object detection in extreme construction conditions

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review


Detail(s)

Original language: English
Article number: 105487
Journal / Publication: Automation in Construction
Volume: 165
Online published: 14 Jun 2024
Publication status: Published - Sept 2024

Abstract

Current construction object detection models are vulnerable in complex conditions: they are trained on conventional data and lack robustness in extreme situations, and the scarcity of annotated extreme data worsens the problem. A new end-to-end unified image adaptation You-Only-Look-Once-v5 (UIA-YOLOv5) model is presented for robust object detection in five extreme conditions: low light, intense light, fog, dust, and rain. UIA-YOLOv5 adaptively enhances the input image to make its content visually clear and then feeds the enhanced image to YOLOv5 for object detection. Sufficient extreme images are synthesized via neural style transfer (NST) and mixed with conventional data during model training to reduce domain shift. An extreme construction dataset (ExtCon) containing 506 images labeled with 13 object classes is constructed for real-world evaluation. Results show that UIA-YOLOv5 matches the performance of YOLOv5 on conventional data while being more robust on extreme data, with an 8.21% mAP@0.5 improvement. © 2024 Published by Elsevier B.V.
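The enhance-then-detect pipeline described in the abstract can be sketched as follows. This is only an illustrative stand-in, not the paper's method: the actual UIA module is learned end-to-end, whereas here a simple brightness-driven gamma correction plays the role of the adaptive enhancement, and the detector is stubbed out in place of YOLOv5. All function names (`adaptive_enhance`, `detect`, `uia_pipeline`) are hypothetical.

```python
import numpy as np


def adaptive_enhance(image: np.ndarray) -> np.ndarray:
    """Hypothetical image-adaptation step: gamma correction whose
    strength is chosen from the image's mean brightness (the paper's
    UIA module is a learned enhancer; this is only a stand-in)."""
    mean = image.mean() / 255.0
    # Dark inputs (low light) get gamma < 1, which brightens them;
    # bright inputs (intense light) get gamma > 1, which darkens them.
    gamma = float(np.clip(mean / 0.5, 0.4, 2.5))
    normalized = image.astype(np.float32) / 255.0
    return (np.power(normalized, gamma) * 255.0).astype(np.uint8)


def detect(image: np.ndarray) -> list:
    """Placeholder for the YOLOv5 detector head; a real version would
    return a list of (class_id, confidence, box) tuples."""
    return []


def uia_pipeline(image: np.ndarray) -> list:
    # Enhance first, then run detection on the visually clarified image.
    return detect(adaptive_enhance(image))
```

The key design point carried over from the abstract is the ordering: enhancement is applied before detection, so the detector always sees a visually clarified image regardless of the capture condition.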

Research Area(s)

  • Construction industry, Robust object detection, Extreme conditions, Image adaptation, Neural style transfer, Extreme construction dataset