5. Compute radiance images of all texture images from a
calibrated response curve.
6. Re-sample pixels within each triangle based on the projective
transformation between the triangle plane and the image plane.
7. Apply filtering and radiometric corrections to the textures.
8. Create MIP maps: a sequence of textures, each a progressively
lower-resolution version (by a factor of 2) of the original image.
This solves aliasing problems and also helps with texture size
management (a brief sketch follows this list).
9. Create bump maps on smooth surfaces, like walls, by
introducing small variations in surface normals, thus adding
roughness and a more realistic look to surfaces when lit.
10. Rendering with efficient rendering software.
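As a rough illustration of step 8, the following Python sketch builds a MIP chain by repeated factor-of-2 box-filter downsampling; the power-of-two texture size and the box filter are assumptions made for the example, not a description of the authors' implementation.

import numpy as np

def build_mip_chain(texture):
    """Build a MIP chain by repeated factor-of-2 box-filter downsampling.

    `texture` is an (H, W, 3) float array; H and W are assumed to be
    powers of two for simplicity.
    """
    levels = [texture]
    while min(levels[-1].shape[:2]) > 1:
        t = levels[-1]
        # Average each 2x2 block into one pixel of the next coarser level.
        t = 0.25 * (t[0::2, 0::2] + t[1::2, 0::2]
                    + t[0::2, 1::2] + t[1::2, 1::2])
        levels.append(t)
    return levels

# A 512x512 texture yields 10 levels: 512, 256, ..., 2, 1.
chain = build_mip_chain(np.random.rand(512, 512, 3))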
Figure 3: Textured 3D model.
Figure 3 shows the final textured model of the frescos. Here a
brief overview of the different kinds of approaches adopted for
texture mapping and texture correction is reported. More
details can be found in [El-Hakim et al., 2003].
Selecting the most appropriate image for texturing: With sufficient
image overlap, the texture of a triangle can be obtained from a
number of different images. As a rule, the image that yields the
largest texture should be selected; however, in order to avoid
assigning many different images to adjacent triangles, a local
re-assignment of triangle patches to images was employed. In this
way the number of triangle edges where adjacent triangles are
mapped from different images is reduced.
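The "largest texture" rule can be illustrated as follows: for each candidate image, project the triangle's three 3D vertices and compare the projected areas. The Python/NumPy sketch below assumes hypothetical 3x4 projection matrices from calibration as input, and it omits occlusion testing and the subsequent local re-assignment step.

import numpy as np

def best_image_for_triangle(tri_xyz, projections):
    """Return the index of the image in which a 3D triangle projects
    to the largest area.

    tri_xyz:      (3, 3) array, one 3D vertex per row.
    projections:  list of 3x4 camera projection matrices P = K [R | t]
                  (hypothetical input from calibration/bundle adjustment).
    """
    best_idx, best_area = None, 0.0
    hom = np.c_[tri_xyz, np.ones(3)]                 # homogeneous vertices
    for idx, P in enumerate(projections):
        x = (P @ hom.T).T                            # project to the image
        if np.any(x[:, 2] <= 0):                     # vertex behind the camera
            continue
        uv = x[:, :2] / x[:, 2:3]                    # pixel coordinates
        a, b = uv[1] - uv[0], uv[2] - uv[0]
        area = 0.5 * abs(a[0] * b[1] - a[1] * b[0])  # projected triangle area
        if area > best_area:
            best_idx, best_area = idx, area
    return best_idx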
Texture perspective: The employed method defines a local
texel coordinate system for each 3D triangle. The projective
transformation between the plane of the triangle and the image
plane is determined and used to resample the texture within
each triangle. This is followed by a low-pass filter to remove
high-frequency noise introduced by the re-sampling. The
geometric accuracy of the projective transformation was ensured
by proper camera calibration, image registration and bundle
adjustment, which were carried out by the TexCapture software.
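For a planar triangle, the map from a local texel coordinate system to the image is a homography that can be built directly from the calibrated camera. The sketch below is one standard construction, assuming a camera model x ~ K (R X + t); it is not taken from TexCapture.

import numpy as np

def plane_to_image_homography(K, R, t, v0, v1, v2):
    """Homography H mapping local (u, v) coordinates on a triangle's
    plane to pixel coordinates, for a camera model x ~ K (R X + t).

    v0, v1, v2 are the triangle's 3D vertices; v0 is the texel origin.
    """
    e1 = (v1 - v0) / np.linalg.norm(v1 - v0)    # in-plane u axis
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)                   # plane normal
    e2 = np.cross(n, e1)                        # in-plane v axis
    # A plane point X = v0 + u*e1 + v*e2 projects to
    # x ~ K [R e1 | R e2 | R v0 + t] (u, v, 1)^T, i.e. a 3x3 homography.
    return K @ np.column_stack((R @ e1, R @ e2, R @ v0 + t))

The texture of each triangle can then be resampled by warping the source image into texel space with the inverse of H (for instance with OpenCV's cv2.warpPerspective), followed by the low-pass filter mentioned above.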
Radiometric distortions: Radiometric distortions arise along
common edges of adjacent triangles mapped from different
images. Though corrections to this problem are still
experimental, some solutions were adopted for this project.
First, the response function of the digital camera was
determined, so that the non-linear mapping between the
digitised brightness value of a pixel and the scene radiance
could be estimated. This knowledge makes it possible to merge
images taken at different exposure settings, different angles, or
even with different imaging devices. Second, color corrections
between adjacent textures obtained from different images are
applied to minimize the differences.
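A sketch of how a calibrated response curve is typically applied, assuming the curve has already been recovered (e.g. in the Debevec-Malik style) as a 256-entry lookup table of the log inverse response; the function name and table format are illustrative, not the project's actual code.

import numpy as np

def to_radiance(img8, log_inv_response, exposure_s):
    """Map an 8-bit image to relative scene radiance.

    log_inv_response: length-256 lookup table g with g[Z] = ln f^-1(Z),
    assumed already recovered by response-curve calibration.
    exposure_s: exposure time of the shot, in seconds.
    """
    log_exposure = log_inv_response[img8]             # ln(E * dt) per pixel
    return np.exp(log_exposure - np.log(exposure_s))  # relative radiance E

Once all texture images live in this common radiance domain, patches taken at different exposures or angles can be blended with far smaller seams.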
Rendering: In order to achieve the desired performance, the
software for interactive visualisation has been developed at
VIT [Paquet and Peters, 2002] using scene-graph tools. Scene
graphs are data structures used to hierarchically organize and
manage the contents of spatially oriented scene data. They
offer a high-level alternative to low-level graphics rendering
APIs such as OpenGL.
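A minimal sketch of the scene-graph idea: nodes carry local transforms, and a depth-first traversal accumulates world transforms before low-level draw calls are issued. This illustrates the concept only, not the API of the VIT software; the node and class names are invented for the example.

import numpy as np

class SceneNode:
    """Minimal scene-graph node: a name, a local 4x4 transform, children."""

    def __init__(self, name, local=None):
        self.name = name
        self.local = np.eye(4) if local is None else local
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def traverse(self, parent_world=None):
        # Depth-first traversal accumulating world transforms, as a
        # scene-graph renderer does before issuing low-level draw calls.
        world = (np.eye(4) if parent_world is None else parent_world) @ self.local
        yield self.name, world
        for child in self.children:
            yield from child.traverse(world)

root = SceneNode("room")
root.add(SceneNode("walls")).add(SceneNode("fresco_patch"))
for name, world in root.traverse():
    print(name, world[:3, 3])     # node name and world-space position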
5. LASER SCANNER-BASED 3D MODELING
Besides the photogrammetric surveys, range data of the interior
of the room were acquired with the Riegl LMS-Z360 laser
scanning system. Given the relatively small size of the room and
the wide field of view of the laser scanner, only 6 scans were
sufficient to survey the whole volume: 4 for the walls and 2 for
the ceiling. By setting a high scan resolution and placing
the scanner in the middle of the room, a spatial resolution
(displacement on the XY plane) of 5 mm on average was obtained.
Although the acquisition software bundled with the laser
scanner offers a tool for scan registration, both this
procedure and all the subsequent ones needed for the 3D
modeling were performed using the PolyWorks Modeler of
[Innovmetric, 2004]. This software provides a very powerful
environment for the interactive modeling of real objects whose
geometry has been acquired in terms of very dense point
clouds. It is composed of several modules, which allow the
user to carry out all the modeling steps, keep control over
the entire process and verify the accuracy of the results through
a number of dedicated tools. In this stage the following modules
have been employed:
— IMAlign for the scan alignment,
— IMMerge for the mesh generation,
— IMInspect for the model georeferencing.
5.1 Range Data Alignment:
The interactive manual N-points alignment procedure was
adopted in this case to register the wall and ceiling scans
to each other. The acquisition of intensity data greatly helped
the registration step, as it made it easy to recognize matching
points on adjacent scans, as shown in figure 4.
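The N-points alignment amounts to a closed-form least-squares rigid fit between the picked point pairs. Below is the standard SVD solution (Horn/Kabsch); it is a common way to implement this step, not necessarily what PolyWorks does internally.

import numpy as np

def rigid_from_point_pairs(src, dst):
    """Least-squares rigid transform (R, t) with dst_i ~ R @ src_i + t,
    from N >= 3 corresponding points (Horn's closed form, via SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                  # guard against reflections
    t = cd - R @ cs
    return R, t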
As a result of this first processing step, an approximate
transformation matrix for each scan pair was obtained and then
used in the second stage as the starting point for a refined
alignment based on the well-known ICP algorithm. In both
steps one scan group was locked in order to define the reference
frame of the model. Then a global ICP-based alignment
algorithm was run in order to refine the results of the first
stage. Such an approach [see Soucy et al., 1996] yielded a very
good registration, with a mean convergence value of 6.3 × 10⁻⁵,
much lower than the preset threshold (10⁻²), and an average
RMS alignment error of 0.006 m (figure 5). This result
confirms the quality of the registration procedure
implemented in PolyWorks; the residual error is due to the
inherent accuracy of the laser scanner. Finally, through
IMMerge, the scans were triangulated in order to model the
room surface as a single mesh, which is the representation best
suited for model texturing (figure 6).
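For reference, a point-to-point ICP refinement of the kind described above can be sketched as follows, reusing rigid_from_point_pairs from the previous sketch; the iteration limit and the convergence tolerance on the change in mean squared error are hypothetical values for the example.

import numpy as np
from scipy.spatial import cKDTree

def icp_refine(src, dst, R, t, max_iters=50, tol=1e-6):
    """Point-to-point ICP: refine an approximate rigid alignment (R, t)
    of scan `src` onto scan `dst`, both given as (N, 3) point arrays."""
    tree = cKDTree(dst)                    # nearest-neighbour lookup in dst
    prev_err = np.inf
    for _ in range(max_iters):
        moved = src @ R.T + t              # apply the current estimate
        dist, idx = tree.query(moved)      # closest-point correspondences
        R, t = rigid_from_point_pairs(src, dst[idx])  # refit (see above)
        err = np.mean(dist ** 2)           # MSE under the previous estimate
        if abs(prev_err - err) < tol:      # convergence test
            break
        prev_err = err
    return R, t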