Enhanced three-dimensional model reconstruction based on local ternary pattern-guided fusion of multi-exposure images

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Detail(s)

Original language: English
Pages (from-to): 1546-1563
Journal / Publication: IET Image Processing
Volume: 17
Issue number: 5
Online published: 3 Jan 2023
Publication status: Published - 17 Apr 2023

Abstract

Computer vision applications usually rely on features extracted from input images with good visibility. However, image acquisition systems may produce degraded images with low contrast or distorted colours; for instance, bad weather (haze, fog) can cause outdoor images to be captured with low visibility. Image processing algorithms generally assume that the input image is the scene radiance, so haze removal, which recovers the image radiance, ensures that reliable features are extracted and that the algorithms achieve optimal performance. Inspired by the concept of image dehazing, the authors propose an image enhancement method that improves the visibility of images. Each original image is first transformed into multiple exposure images by means of gamma-correction operations and adaptive histogram equalization. The transformed images are analysed by computing the local ternary pattern. The enhanced image is then formed pixel by pixel from the set of transformed image pixels, weighted by a function of the local pattern feature. The authors evaluate the proposed method on four benchmark image dehazing datasets. The quantitative results show that the method outperforms many deterministic algorithms and deep learning models. Moreover, the authors investigate the impact of image enhancement on a practical image-based application: the reconstruction of three-dimensional (3D) models of survey scenes. Accurate 3D model reconstruction depends on high-quality images; degraded images result in large errors in the reconstructed 3D model. Experiments have been carried out on outdoor and indoor surveys. The analysis finds that, when fed into photogrammetry software, the images enhanced by the authors' method can reconstruct 3D scene models with sub-millimetre mean errors, much better than those obtained with the original images. As shown in the visual and quantitative results of the 3D model reconstruction, the proposed method also outperforms other image enhancement methods.
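The pipeline sketched in the abstract (generate pseudo multi-exposure images, compute a local ternary pattern on each, then fuse with pattern-derived weights) can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the gamma values, the tolerance `t`, and the use of per-pixel ternary "activity" as the fusion weight are all illustrative assumptions, and the adaptive-histogram-equalization branch of the paper is omitted for brevity.

```python
import numpy as np

# 8-neighbour offsets for a 3x3 window (clockwise from top-left)
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def gamma_exposures(img, gammas=(0.5, 1.0, 2.0)):
    """Simulate multi-exposure images via gamma correction (img in [0, 1]).
    The gamma values here are illustrative, not the paper's."""
    return [np.clip(img, 0.0, 1.0) ** g for g in gammas]

def ltp_activity(img, t=0.05):
    """Per-pixel count of 3x3 neighbours whose ternary code is non-zero,
    i.e. neighbours differing from the centre by more than tolerance t.
    A hypothetical stand-in for the paper's local-pattern weight."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    activity = np.zeros((h, w), dtype=np.float64)
    for dy, dx in OFFSETS:
        neigh = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        activity += (np.abs(neigh - img) > t).astype(np.float64)
    return activity

def ltp_fuse(img, gammas=(0.5, 1.0, 2.0), t=0.05, eps=1e-6):
    """Fuse pseudo-exposures, weighting each pixel by its LTP activity
    so that well-textured (high-visibility) exposures dominate."""
    exposures = gamma_exposures(img, gammas)
    weights = [ltp_activity(e, t) + eps for e in exposures]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, exposures)) / total
```

Because the fused result is a per-pixel convex combination of exposures that each lie in [0, 1], the output also stays in [0, 1] with no extra clipping step.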

Research Area(s)

  • computer vision, image enhancement, image texture, thin cloud removal, haze
