data are used to estimate an approximate camera position. In the second step, 2D points are extracted in the image and matched with 3D points of the model using the Hough transform and a generalized M-estimator. Then Lowe's (1987) algorithm is applied to refine the camera parameters. This approach yields very good results in downtown areas; however, it fails in residential regions because too few vertical edges can be extracted.
In texture mapping using thermal images, the properties specific to the IR spectrum should be taken into consideration. First of all, IR images have lower contrast and lower resolution than images in the visible spectrum. Consequently, matching with the 3D building model based on edge matching (Frueh et al., 2004) or on vertical vanishing points (Ding & Zakhor, 2008) can be difficult. Stilla et al. (2000) proposed a method for matching low-resolution IR images based on intersection points of roof edges.
For texture mapping, a visibility analysis is necessary. Generally, there are two groups of methods for checking visibility: (i) variations of the depth-buffer (depth image) approach (Frueh et al., 2004; Hoegner & Stilla, 2007; Karras et al., 2007) and (ii) polygon-based hidden-area detection (Kuzmin et al., 2004). In the polygon-based method proposed by Kuzmin et al. (2004), all polygons are projected onto the image plane and intersected. This procedure is appropriate for nadir-view images because of the small number of intersections. For oblique views, however, this method would be very time consuming and could produce many small polygons.
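The polygon-based test can be illustrated with a small sketch. The following is a minimal illustration in Python using the shapely library; the two projected footprints and their mean depths are hypothetical stand-ins for a full model:

# Minimal sketch of polygon-based hidden-area detection (after Kuzmin et
# al., 2004): projected faces are intersected in the image plane and a
# nearer face clips the visible region of a farther one. Assumes the
# projection into image coordinates has already been carried out.
from shapely.geometry import Polygon

# Hypothetical projected footprints (image coordinates) with mean depth.
near_face = (Polygon([(2, 2), (6, 2), (6, 6), (2, 6)]), 10.0)  # closer
far_face  = (Polygon([(4, 4), (9, 4), (9, 9), (4, 9)]), 25.0)  # farther

def visible_part(face, others):
    """Subtract the footprints of all nearer faces from this face."""
    poly, depth = face
    for other_poly, other_depth in others:
        if other_depth < depth:      # the other face occludes this one
            poly = poly.difference(other_poly)
    return poly

visible = visible_part(far_face, [near_face])
print("visible fraction: %.2f" % (visible.area / far_face[0].area))

With many overlapping oblique footprints, the repeated difference operations fragment the faces, which illustrates the drawback of this method for oblique views noted above.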
The depth-buffer method is a basic method for removing hidden surfaces adopted from computer graphics. The depth-buffer is a matrix storing, for every pixel, the distance from the projection centre to the model surface. This method has often been proposed in variations. Karras et al. (2007) generalize the problem of orthorectification and texture mapping and propose a method for visibility checking based on a depth image. The triangulated 3D mesh is projected onto the projection plane, and for every triangle the occupied pixels get the identity number (ID) of the triangle. For pixels with more than one ID, the closest triangle is chosen.
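A minimal sketch of this ID-buffer idea is given below; it is an illustration, not the implementation of Karras et al. (2007). Rasterization is reduced to a point-in-triangle test of pixel centres, and each triangle is assigned a single mean depth for brevity:

import numpy as np

H, W = 8, 8
depth_buffer = np.full((H, W), np.inf)   # distance to projection centre
id_buffer    = np.full((H, W), -1)       # ID of the closest triangle

def point_in_triangle(p, a, b, c):
    """Sign test of point p against the three triangle edges."""
    def cross(o, u, v):
        return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (neg and pos)

# Hypothetical projected triangles: (ID, 2D vertices, mean depth).
triangles = [(0, [(1, 1), (6, 1), (1, 6)], 30.0),
             (1, [(2, 2), (7, 3), (3, 7)], 20.0)]  # closer, partly on top

for tri_id, verts, depth in triangles:
    for y in range(H):
        for x in range(W):
            # Test the pixel centre against the triangle footprint.
            if point_in_triangle((x + 0.5, y + 0.5), *verts):
                if depth < depth_buffer[y, x]:   # keep the closest ID
                    depth_buffer[y, x] = depth
                    id_buffer[y, x] = tri_id
print(id_buffer)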
Frueh et al. (2004) used a modified depth-buffer which additionally stores, at each pixel, the product of the triangle's normal vector with the camera viewing direction. Using this product, non-occluded edges can be detected. Abdelhafiz & Niemeier (2009) integrate digital images and laser scanning point clouds. They use a Multi Layer 3DImage algorithm which classifies the visibility in two stages: a point stage and a surface stage. A visible layer and back layers are applied: occluded vertices are sent to a back layer, while visible vertices appear on the visible layer. An image is used for texture mapping of a mesh if all three of its vertices are visible in this image. Abdelhafiz & Niemeier also discuss the problem of extrinsic (un-modelled) occlusions caused by objects such as traffic signs, trees, and street lamps. They propose a Photo Occlusion Finder algorithm which checks the textures obtained from several images for one mesh: when the textures of one mesh are not similar, an occlusion has occurred.
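The idea behind the Photo Occlusion Finder can be illustrated with a small sketch; the use of normalized cross-correlation and the threshold value are assumptions for illustration, not the authors' exact similarity criterion:

import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized texture patches."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def occlusion_suspected(textures, threshold=0.8):
    """Flag a mesh whose textures from different images disagree."""
    for i in range(len(textures)):
        for j in range(i + 1, len(textures)):
            if ncc(textures[i], textures[j]) < threshold:
                return True   # dissimilar textures -> extrinsic occlusion
    return False

# Hypothetical textures of one mesh, resampled to a common size.
rng = np.random.default_rng(0)
base = rng.random((16, 16))
occluded = base.copy()
occluded[4:12, 4:12] = 0.0   # e.g. a traffic sign in front of the face
print(occlusion_suspected([base, base + 0.01, occluded]))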
Objects captured in image sequences taken with a high frame rate from a flying platform appear in multiple frames. In this case, the textures with optimal quality have to be selected for texturing. Lorenz & Doellner (2006) introduced a local effective resolution and discuss it on the example of images from a High Resolution Stereo Camera (HRSC), due to the special projection of its line scanners (perspective and parallel). Frueh et al. (2004) use a focal plane array. They determine optimal textures taking into account occlusion, image resolution, surface normal orientation, and coherence with neighbouring triangles. They propose to accept textures with a few occluded pixels instead of textures with very low resolution taken from an extremely oblique view. This quality calculation is focused on texturing with optical images and good user perception.
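The loss of texture resolution under oblique views can be approximated with a simple pixel-footprint estimate. The sketch below assumes a pinhole camera and illustrative sensor values; it is not the formulation of Lorenz & Doellner (2006):

import numpy as np

def effective_resolution(distance, focal_length, pixel_pitch, incidence_deg):
    """Footprint of one pixel on the face (metres per pixel).

    distance      : range from projection centre to the face [m]
    focal_length  : camera constant [m]
    pixel_pitch   : physical pixel size on the sensor [m]
    incidence_deg : angle between viewing ray and face normal [deg]
    """
    gsd = distance * pixel_pitch / focal_length      # footprint head-on
    return gsd / np.cos(np.radians(incidence_deg))   # stretched when oblique

# A face seen from 50 m: head-on vs. extremely oblique views.
for angle in (0.0, 45.0, 75.0):
    r = effective_resolution(50.0, 0.025, 25e-6, angle)
    print("incidence %4.1f deg -> %.1f cm/pixel" % (angle, r * 100))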
In this paper we propose a texture selection method for thermal inspection of buildings using a weighted quality function. This approach allows reducing or increasing the influence of occlusions, distance, and viewing direction on the texture quality. In Chapter 2 the camera calibration, positioning, and orientation necessary for texturing are described. In Chapter 3 a concept for texture mapping is introduced; moreover, the influence of oblique-view imagery on texture resolution is discussed and the equation for the weighted quality measure is presented. Finally, experiments with some exemplary textures are shown in Chapter 4 and discussed in Chapter 5.
2. SYSTEM CALIBRATION
In most cases, as already mentioned, GPS/INS data do not refer to the projection centre. Consequently, boresight and lever-arm parameters are required. In addition, camera parameters, such as the focal length, principal point, and distortions, need to be determined. As a solution, we propose a system calibration using an extended bundle adjustment with camera self-calibration, which is described by Kolecki et al. (2010). In this method, ground control points (GCPs) need to be measured in a few images of the sequence, and all parameters of exterior and interior orientation as well as boresight and lever-arm corrections are estimated. The parameters obtained in the adjustment are then applied for the projection onto all images of the sequence.
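A sketch of how such corrections are applied to a GPS/INS pose is given below; the sign and frame conventions (body-to-world rotations, lever arm given in the body frame) are assumptions for illustration, not necessarily those of Kolecki et al. (2010):

import numpy as np

def camera_pose(R_body2world, X_ins, R_boresight, lever_arm):
    """Derive the camera exterior orientation from a GPS/INS pose.

    R_body2world : 3x3 attitude of the INS body frame in world coordinates
    X_ins        : GPS/INS position (not the projection centre) [m]
    R_boresight  : 3x3 boresight rotation between body and camera frame
    lever_arm    : offset from INS to projection centre, body frame [m]
    """
    R_cam = R_body2world @ R_boresight.T        # camera attitude in world
    X_cam = X_ins + R_body2world @ lever_arm    # projection centre
    return R_cam, X_cam

# Illustrative values: level flight, calibrated boresight and lever arm.
R_body = np.eye(3)
X_ins = np.array([5000.0, 2000.0, 300.0])
R_bore = np.eye(3)                  # from the extended bundle adjustment
arm = np.array([0.15, -0.05, 0.30])
print(camera_pose(R_body, X_ins, R_bore, arm))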
3. A CONCEPT FOR TEXTURE MAPPING
The region within an IR frame corresponding to a face of the 3D model can be determined by projecting the polygon into the image. In datasets captured by moving cameras with a high frame rate, most polygons of the model appear many times in the images, with different aspect angles. This allows choosing the texture captured from the best pose. Additionally, in some cases the problem of occlusions can be bridged. The quality of the textures extracted from different frames belonging to the same plane varies depending on viewing direction, distance to the camera, and partial occlusions. For selecting the best texture, a quality measure has to be defined and the selection procedure has to be implemented. A flowchart of this procedure is depicted in Fig. 1.
Starting from the first frame, a projection is carried out for each face. If the face lies within the frame, the partial occlusion o_ij (Chapter 3.1) and the quality measure q_ij (Chapter 3.3) are calculated. If the quality q_ij is higher than the quality of the currently stored texture, a new texture t_ij is created and the current texture is overwritten with t_ij.
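This loop can be summarized in Python-like pseudocode; the helpers project_face, occlusion_fraction, quality, and frame.extract are hypothetical placeholders for the steps defined in Chapters 3.1-3.3:

# Sketch of the texture selection loop (Fig. 1); the helper functions
# are hypothetical placeholders, not the actual implementation.
def select_textures(frames, faces, project_face, occlusion_fraction, quality):
    best = {}   # face index -> (quality q_ij, texture t_ij)
    for frame in frames:                        # starting from the first frame
        for i, face in enumerate(faces):
            region = project_face(face, frame)  # polygon in image coordinates
            if region is None:                  # face not within this frame
                continue
            o_ij = occlusion_fraction(face, frame)       # Chapter 3.1
            q_ij = quality(face, frame, o_ij)            # Chapter 3.3
            # Overwrite the stored texture only if the new one is better.
            if i not in best or q_ij > best[i][0]:
                best[i] = (q_ij, frame.extract(region))  # new texture t_ij
    return best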
3.1 Occlusions
Every face is projected into the image, and the pixels occupied by this face get the ID of the face and its distance from the projection centre. A pixel is considered as occupied if its centre