This project aims to reconstruct a 3D object model from an image sequence captured
by a mobile camera. Our system can be used to generate a 3D model of any type of
object; we are particularly interested in demonstrating its capability in the
reconstruction of a 3D head model. An efficient method has been developed that
facilitates the reconstruction of a complex surface model. Conventional 3D
reconstruction techniques, which rely on specialized equipment, are inflexible and
very expensive. To reduce the cost and increase the flexibility, our system requires
only a consumer digital camera for image sequence acquisition. A new
colored calibration pattern is designed to allow the object and the calibration pattern to
be captured simultaneously, which makes the system more convenient and less
costly to set up.
The whole process of 3D object reconstruction consists of four major steps:
camera calibration, volumetric model reconstruction, surface model reconstruction and
texture mapping. Camera calibration is an important step to determine the
relationship between 3D world coordinates and the corresponding 2D image
coordinates. The volumetric model is reconstructed from the image sequence
(multiple views) of the object by Shape-from-Silhouette/Photo-consistency. The
volumetric model in the real-world space is then converted to a surface model. Finally, a
single texture map is created from the original multiple camera views of the object.
The linear camera calibration method is fast and highly accurate. The camera can
be calibrated using either a coplanar or a non-coplanar calibration pattern, with
on-line lens distortion compensation. Compensating for both radial and tangential
lens distortion improves the accuracy of the linear camera calibration.
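As an illustrative sketch (not the thesis's implementation), the pinhole projection with the Brown-Conrady radial (k1, k2) and tangential (p1, p2) distortion terms that such compensation models can be written as:

```python
import numpy as np

def project_point(X, K, R, t, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Project a 3D world point to pixel coordinates with a pinhole
    camera plus radial (k1, k2) and tangential (p1, p2) distortion.
    Illustrative sketch only; parameter values below are made up."""
    # Transform into the camera frame and normalize by depth.
    Xc = R @ X + t
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]
    r2 = x * x + y * y
    # Radial and tangential distortion of the normalized coordinates.
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # Apply the intrinsic matrix to obtain pixel coordinates.
    u = K[0, 0] * xd + K[0, 2]
    v = K[1, 1] * yd + K[1, 2]
    return u, v

# Hypothetical intrinsics and pose for a quick check.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
u, v = project_point(np.array([0.1, -0.2, 0.0]), K, R, t, k1=0.01)
```

Calibration then amounts to estimating K, R, t, and the distortion coefficients from known pattern points and their observed image positions.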
For the 3D model reconstruction, a novel reconstruction algorithm, "Shape-from-
Silhouette/Photo-consistency", is implemented. This algorithm combines the voting-localizing
operations of Shape-from-Silhouette in a novel space with the photo-consistency
constraint among neighboring views of the object, overcoming the
shortcomings of each individual algorithm. A 2D voxel mask in 3D space is proposed that can
effectively locate concavities on the object surface. The volumetric model is then
converted to a surface model by the marching cubes algorithm.
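A minimal sketch of the silhouette-carving step on a toy 16x16x16 grid with two orthographic views (the thesis uses calibrated perspective cameras and adds the photo-consistency test, which this sketch omits): a voxel survives only if it projects inside the silhouette in every view.

```python
import numpy as np

def carve(grid_size, silhouettes, project):
    """Shape-from-Silhouette voxel carving: keep a voxel only if its
    projection lies inside the silhouette mask in every view.
    Toy orthographic version for illustration."""
    occupied = np.ones((grid_size,) * 3, dtype=bool)
    coords = np.indices(occupied.shape).reshape(3, -1).T
    for view, mask in enumerate(silhouettes):
        for (i, j, k) in coords:
            u, v = project(view, i, j, k)
            if not mask[u, v]:
                occupied[i, j, k] = False  # carve this voxel away
    return occupied

# Two identical circular silhouettes of a centered sphere, radius 6.
N, R = 16, 6.0
c = (N - 1) / 2.0
yy, xx = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
disk = (xx - c) ** 2 + (yy - c) ** 2 <= R ** 2

def project(view, i, j, k):
    # View 0 looks along the z axis; view 1 looks along the x axis.
    return (i, j) if view == 0 else (j, k)

model = carve(N, [disk, disk], project)
```

With only two views the result is the visual hull (here, the intersection of two cylinders), which is why the photo-consistency constraint is needed to recover concavities.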
To give the object model a realistic appearance, two texture mapping methods are
developed: a view-independent method and a view-dependent method. The
view-independent method combines the individual photographs into a single texture
map, while the view-dependent method blends the different input photographs,
weighted by viewing direction, to form a single texture.
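One common form of view-dependent blending weights each photograph's colour sample by the cosine between its viewing direction and the surface normal; the following sketch uses that weighting as an assumption for illustration, not necessarily the thesis's exact scheme.

```python
import numpy as np

def blend_views(colors, view_dirs, normal):
    """Blend per-view colour samples for one surface point, weighting
    each view by how directly its camera faces the surface (cosine
    weight, assumed scheme). Views seeing the back get zero weight."""
    normal = normal / np.linalg.norm(normal)
    w = np.array([max(0.0, -np.dot(d / np.linalg.norm(d), normal))
                  for d in view_dirs])
    w = w / w.sum()  # normalize weights to sum to 1
    return (w[:, None] * np.asarray(colors, dtype=float)).sum(axis=0)

# Two hypothetical cameras: one head-on, one at a grazing angle.
colors = [[200.0, 0.0, 0.0],   # sample from the head-on view
          [0.0, 200.0, 0.0]]   # sample from the grazing view
dirs = [np.array([0.0, 0.0, -1.0]),
        np.array([1.0, 0.0, -0.1])]
normal = np.array([0.0, 0.0, 1.0])
px = blend_views(colors, dirs, normal)
```

The head-on view dominates the blended pixel, which is the intended behaviour: oblique views contribute less because their samples are more foreshortened.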
The results of camera calibration and volumetric modeling are shown. Some
reconstructed 3D photorealistic models are presented to demonstrate the performance
of the system.
| Date of Award | 4 Oct 2004 |
|---|---|
| Original language | English |
| Awarding Institution | City University of Hong Kong |
| Supervisor | Kwok Leung CHAN (Supervisor) |
- Image processing
- Image reconstruction
- Three-dimensional imaging
- Digital techniques
3D object model reconstruction from multiple views
WONG, S. S. (Author). 4 Oct 2004
Student thesis: Master's Thesis