Blender. Initially, the complete dataset of 600k triangles was processed. With this number of triangles, the performance of Blender was very poor. Nevertheless, a VRML model was exported with the full 600k triangles. Its visual quality was satisfactory, but real time navigation was practically impossible. Therefore, a reduced dataset of 100k triangles was generated, with which the performance of the real time visualization was acceptable. A comparison of the visual impression, in particular of the visible details, showed that the two datasets are equivalent, because most of the information perceived by the human eye is provided by the texture rather than by the geometry. Consequently, for pure visualization the geometry can be reduced to a certain level as long as the full texture resolution is retained. Finally, a virtual flight around the object was produced with the full dataset at a resolution of 1024 x 768 pixels. Figure 9 shows the result for the Khmer head dataset, which was generated using 12 images.
Figure 9. Final result, the textured Khmer head
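To illustrate this kind of geometry reduction, the following minimal sketch uses Blender's Python API (bpy) and its decimate modifier, assuming the full-resolution mesh is the active object; the tool actually used to generate the reduced 100k dataset is not specified here, so this is only one possible realization. The collapse-based decimation keeps the UV coordinates, so the full-resolution texture can still be applied to the reduced mesh.

```python
import bpy

# assumed: the imported full-resolution (600k triangle) mesh is the active object
obj = bpy.context.active_object

# add a collapse decimate modifier and keep roughly 100k of the 600k triangles
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 100_000 / 600_000

# apply the modifier (recent Blender versions); UVs survive, so the full
# texture resolution is retained on the reduced geometry
bpy.ops.object.modifier_apply(modifier=mod.name)
```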
5. CONCLUSIONS
The presented texture mapping workflow covers all important steps, from the given orientation data and the original images to the final textured 3D model. It includes a new visibility analysis algorithm based entirely on vector algebra. The analysis is therefore independent of the image resolution and can handle sparse as well as dense datasets fully automatically, without manual interaction. The subsequent Triangle to Image Assignment procedure selects the best texture source for each triangle from multiple images. No averaging is performed, which preserves the high-frequency texture information and thus the finest details of the images.
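As an illustration of such a per-triangle selection, the sketch below picks, for every triangle, the visible image with the most head-on viewing direction (largest cosine between the triangle normal and the ray to the camera). The actual selection criterion of the implemented procedure may differ; the array names and the viewing-angle score are assumptions of this example.

```python
import numpy as np

def assign_triangles_to_images(centroids, normals, cam_centers, visible):
    """centroids: (T,3) triangle centroids, normals: (T,3) unit normals,
    cam_centers: (C,3) camera projection centres,
    visible: (T,C) bool from the visibility analysis.
    Returns the index of the chosen texture image for every triangle."""
    # unit rays from each triangle centroid to each camera centre: (T, C, 3)
    rays = cam_centers[None, :, :] - centroids[:, None, :]
    rays /= np.linalg.norm(rays, axis=2, keepdims=True)
    # score: cosine between triangle normal and viewing ray, per image
    score = np.einsum('tck,tk->tc', rays, normals)
    score = np.where(visible, score, -np.inf)   # occluded views are excluded
    return np.argmax(score, axis=1)             # best image per triangle, no averaging
```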
To achieve a photorealistic and seamlessly textured model, three image enhancement steps were implemented. First, the vignetting was removed using a simple cos⁴ relation. Second, the global brightness difference of each image, caused for example by different exposure times, was removed. In a last step, a local brightness correction was applied to compensate for the influence of different or moving light sources and spotlight effects by generating a brightness difference surface over the whole image. This surface was interpolated with a biharmonic spline function, using common points in two images as input.
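A minimal sketch of two of these corrections is given below: the cos⁴ vignetting removal (assuming a known principal point and focal length in pixels) and the local brightness difference surface, here interpolated with SciPy's thin-plate-spline RBF as a stand-in for the biharmonic spline actually used. Function and variable names are illustrative.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def remove_cos4_vignetting(image, focal_px, cx, cy):
    """Divide each pixel by cos^4 of its off-axis angle to undo the fall-off."""
    h, w = image.shape[:2]
    x, y = np.meshgrid(np.arange(w) - cx, np.arange(h) - cy)
    cos_theta = focal_px / np.sqrt(focal_px**2 + x**2 + y**2)
    gain = 1.0 / cos_theta**4
    return image * gain if image.ndim == 2 else image * gain[..., None]

def brightness_difference_surface(tie_points_xy, grey_differences, h, w):
    """Interpolate grey-value differences at common points of two images into a
    smooth correction surface over the whole image (thin-plate spline here,
    biharmonic spline in the original work)."""
    rbf = RBFInterpolator(tie_points_xy, grey_differences, kernel='thin_plate_spline')
    gx, gy = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    return rbf(grid).reshape(h, w)
```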
The final visualization of the data was carried out with the open source software Blender, which allows high-resolution rendering of single images as well as the generation of movies.
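For reference, the following sketch shows how such renderings can be scripted in recent Blender versions through the Python API (bpy); output paths and format choices are illustrative.

```python
import bpy

scene = bpy.context.scene
scene.render.resolution_x = 1024          # resolution used for the virtual flight
scene.render.resolution_y = 768
scene.render.resolution_percentage = 100

# single high-resolution still image
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = "//khmer_head_still.png"   # path relative to the .blend file
bpy.ops.render.render(write_still=True)

# fly-around movie: render the animated camera over the frame range
scene.render.image_settings.file_format = 'FFMPEG'
scene.render.ffmpeg.format = 'MPEG4'
scene.render.filepath = "//khmer_head_flight"
bpy.ops.render.render(animation=True)
```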
6. FURTHER WORK
The described algorithms work well and achieve satisfactory results. Nevertheless, some parts will be improved in the future. First, the processing time of the visibility algorithm should be reduced. A possible solution is to split the dataset into tiles and to exploit multi-core CPU systems.
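A possible sketch of such a tiled, multi-core scheme is shown below; for brevity it uses a simple back-face test as a stand-in for the full visibility analysis, and all names are illustrative.

```python
import numpy as np
from multiprocessing import Pool

def backface_candidates(normals_tile, view_dir):
    # stand-in for the per-tile visibility analysis: keep triangles facing the camera
    return normals_tile @ view_dir < 0.0

def process_tile(args):
    normals_tile, view_dir = args
    return backface_candidates(normals_tile, view_dir)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normals = rng.normal(size=(600_000, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    view_dir = np.array([0.0, 0.0, -1.0])

    tiles = np.array_split(normals, 8)          # split the dataset into tiles
    with Pool() as pool:                        # distribute the tiles over CPU cores
        parts = pool.map(process_tile, [(t, view_dir) for t in tiles])
    visible = np.concatenate(parts)
    print(f"{visible.sum()} of {len(visible)} triangles pass the back-face test")
```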
Concerning the Triangle to Image Assignment, a patch growing algorithm will be implemented to reduce the length of the borderlines between different texture sources and thus the number of potential seams.
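One conceivable form of such a scheme is a greedy relabelling over the triangle adjacency graph, as sketched below: a triangle adopts the image used by the majority of its neighbours whenever that image is also an admissible source for it, which shortens the borders between texture patches. The planned patch growing algorithm may of course be designed differently.

```python
from collections import Counter

def shorten_texture_borders(labels, neighbours, allowed, iterations=5):
    """labels: image index per triangle,
    neighbours: list of adjacent-triangle index lists,
    allowed[t][c]: True if image c may texture triangle t (visibility result)."""
    labels = list(labels)
    for _ in range(iterations):
        changed = 0
        for t, nbrs in enumerate(neighbours):
            if not nbrs:
                continue
            best, count = Counter(labels[n] for n in nbrs).most_common(1)[0]
            if best != labels[t] and allowed[t][best] and count > len(nbrs) // 2:
                labels[t] = best
                changed += 1
        if changed == 0:
            break
    return labels
```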
Furthermore, the vignetting reduction algorithm can be enhanced in order to model more complex lens systems.
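One common, more flexible model is an even-order radial polynomial fitted to a flat-field exposure, as in the sketch below; the principal point, the polynomial degree and the use of a flat field are assumptions of this example.

```python
import numpy as np

def fit_radial_vignetting(flat, cx, cy, degree=3):
    """Fit V(r) = 1 + a1*r^2 + a2*r^4 + ... to a flat-field image and return the
    coefficients; this generalizes the pure cos^4 law."""
    h, w = flat.shape
    x, y = np.meshgrid(np.arange(w) - cx, np.arange(h) - cy)
    r2 = (x**2 + y**2) / float(max(cx, cy))**2          # normalised squared radius
    v = flat.astype(float) / flat[int(cy), int(cx)]     # brightness relative to centre
    A = np.stack([r2.ravel()**k for k in range(1, degree + 1)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, (v - 1.0).ravel(), rcond=None)
    return coeffs

def correct_vignetting(image, coeffs, cx, cy):
    h, w = image.shape[:2]
    x, y = np.meshgrid(np.arange(w) - cx, np.arange(h) - cy)
    r2 = (x**2 + y**2) / float(max(cx, cy))**2
    V = 1.0 + sum(a * r2**(k + 1) for k, a in enumerate(coeffs))
    return image / V if image.ndim == 2 else image / V[..., None]
```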
A last improvement could be to replace the cross-correlation function with a least squares matching procedure, in order to improve the point fitting for images with larger angles between the viewing directions.
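To indicate what such a replacement could look like, the sketch below refines an approximate match by a reduced least squares matching with a pure shift and a radiometric offset/gain (the full procedure would typically also estimate an affine geometric transformation); all names and the use of SciPy are assumptions of this example.

```python
import numpy as np
from scipy.ndimage import map_coordinates, sobel

def lsm_refine(template, search, x0, y0, iterations=10):
    """Refine an approximate position (x0, y0) of `template` inside `search`
    by least squares matching with a shift plus radiometric offset/gain."""
    th, tw = template.shape
    ys, xs = np.mgrid[0:th, 0:tw].astype(float)
    f = template.astype(float).ravel()
    g_img = search.astype(float)
    gx_img = sobel(g_img, axis=1) / 8.0          # image gradients of the search image
    gy_img = sobel(g_img, axis=0) / 8.0
    p = np.array([float(x0), float(y0), 0.0, 1.0])   # dx, dy, offset r0, gain r1
    for _ in range(iterations):
        dx, dy, r0, r1 = p
        coords = np.vstack([(ys + dy).ravel(), (xs + dx).ravel()])
        g  = map_coordinates(g_img,  coords, order=1)
        gx = map_coordinates(gx_img, coords, order=1)
        gy = map_coordinates(gy_img, coords, order=1)
        e = f - (r0 + r1 * g)                    # grey-value residuals
        # Jacobian of r0 + r1*g(x+dx, y+dy) with respect to (dx, dy, r0, r1)
        A = np.column_stack([r1 * gx, r1 * gy, np.ones_like(g), g])
        dp, *_ = np.linalg.lstsq(A, e, rcond=None)
        p += dp
        if np.hypot(dp[0], dp[1]) < 1e-3:        # convergence of the shift
            break
    return p[0], p[1]
```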
ACKNOWLEDGEMENT
The author thanks the Swiss National Science Foundation (SNF) for the financial support that made this work possible.
REFERENCES
Akca, D., Remondino, F., Novák, D., Hanusch, T., Schrotter, G.,
and Gruen, A., 2007. Performance evaluation of a coded
structured light system for cultural heritage applications.
Videometrics IX, Proc. of SPIE-IS&T Electronic Imaging, San
Jose (California), USA, January 29-30, SPIE vol. 6491, pp.
64910V-1-12.
Amhar, F., 1998. The Generation of True Orthophotos Using a
3D Building Model in Conjunction with a Conventional DTM.
IAPRS, Vol. 32, Part 4 “GIS-Between Vision and Applications”,
pp. 16-22.
Biasion, A., Dequal, S., Lingua, A., 2004. A new Procedure for
the Automatic Production of True Orthophotos. ISPRS
Conference proceedings, Istanbul, Commission IV, pp. 538-543.
Blender, 2008. http://blender.org (accessed 20 Jan. 2008).
d’Angelo, P., 2007. Radiometric alignment and vignetting calibration. Workshop: Camera Calibration Methods for Computer Vision Systems (CCMVS2007), Bielefeld University, Germany, March 21-24.
El-Hakim, S., Gonzo, L., Picard, M., Girardi, S., Simoni, A.,
2003. Visualisation of Frescoed Surface: Buonconsiglio Castle
- Aquila Tower, Cycle of Months. Proceedings of the International
Workshop on Visualisation and Animation of Reality-based 3D
Models, Tarasp-Vulpera, Switzerland.
Frueh, C., Sammon, R., Zakhor, A., 2004. Automated Texture
Mapping of 3D City Models with Oblique Aerial Imagery. 2nd
International Symposium on 3D Data Processing, Visualisation
and Transmission (3DPVT’04), Thessaloniki, Greece, pp. 275-
282.