Nevertheless, the limited image quality usually makes the acquisition of
separate images with higher resolution necessary. Clearly, this
is equally true for the high requirements of orthoimaging.
However, the laser scanning approach is faced with several pro-
blems. Leaving high cost aside, a major question is that of post-
processing vast volumes of data for mesh triangulation (includ-
ing noise removal and hole-filling), which is a very demanding
procedure indeed (Boehler et al., 2003). A further problem is
that commercial orthoprojection software handles only surfaces
described as a DTM with a unique elevation at each planimetric
XY location. Thus, all scanned 3D points are typically processed
by 2D triangulation into a 2.5D mesh — with predictable conse-
quences on the final orthoimage (Mavromati et al., 2003, give
such an example). Unless one finds suitable projections yielding
single-valued ‘depth’ functions for particular surface types (as
done by Knyaz & Zheltov, 2000), orthoimaging algorithms for
fully 3D models must necessarily be introduced.
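The 2.5D restriction can be illustrated with a small sketch (all coordinates hypothetical): storing a single elevation per planimetric XY location, as a DTM does, necessarily discards all but one of the surface points stacked above that location.

```python
# Toy illustration (hypothetical data): a 2.5D DTM stores a single
# elevation per planimetric (X, Y) cell, so vertically stacked surface
# points -- e.g. a balcony above the ground -- cannot coexist.
points_3d = [
    (2.0, 3.0, 0.0),   # ground under the balcony
    (2.0, 3.0, 4.5),   # balcony floor, same XY location
    (5.0, 3.0, 0.0),   # open ground
]

dtm = {}
for x, y, z in points_3d:
    # Keeping the highest Z per XY cell is one common convention.
    key = (x, y)
    dtm[key] = max(dtm.get(key, float("-inf")), z)

print(len(points_3d), "3D points ->", len(dtm), "2.5D cells")
# The ground point under the balcony has been lost.
```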
The main task of such algorithms is to avoid the common flaws
of orthoimaging (displacement, blind areas, double-projection)
by handling the problem of visibility, which is twofold. On the
one hand, every individual surface unit (groundel) which is visi-
ble in the direction of the orthoimage should be established. In
this way, to each orthoimage pixel a unique elevation is assign-
ed. Next is to check whether these surface points are in fact vi-
sible from the perspective centre of the original image, too. In
case of occlusion, a colour value can be extracted from an adja-
cent image. Such approaches, based on dense regular DTMs de-
rived from laser scanning, have been implemented in aerial and
close-range projects (Kim et al., 2000; Boccardo et al., 2001).
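The twofold visibility test described above can be sketched as follows (a simplified illustration, not the cited implementations; all helpers and data are hypothetical). Step 1 keeps, per orthoimage pixel, the candidate with the largest elevation; step 2 accepts texture from an image only if the selected point is also the first surface hit along the ray to that image's perspective centre, here assumed to come from a depth buffer.

```python
import math

def step1_visible_point(candidates):
    """candidates: list of (triangle_id, z); return the visible one,
    i.e. the candidate with the largest elevation."""
    return max(candidates, key=lambda c: c[1])

def step2_camera_sees(point, camera, depth_along_ray):
    """point, camera: (x, y, z).  depth_along_ray: distance from the
    camera to the first surface hit along the ray towards the point
    (assumed available from a depth buffer).  The point is visible in
    this image only if it is itself that first hit."""
    return math.isclose(math.dist(camera, point), depth_along_ray,
                        rel_tol=1e-6)

# One orthoimage pixel, two stacked triangles: the upper one wins.
tri_id, z = step1_visible_point([(7, 1.2), (9, 4.8)])

# The camera at (0, 0, 10) sees a closer surface at distance 3.0,
# while our point lies 5.2 units away: it is occluded in this image,
# so its colour would be taken from an adjacent image instead.
occluded_point = (0.0, 0.0, 4.8)
print(step2_camera_sees(occluded_point, (0.0, 0.0, 10.0), 3.0))  # False
```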
Following the usual photogrammetric practice, in the above and
other cases the texture for each visible surface unit is extracted
from a corresponding single original image. Provided that over-
lapping images exist, the source image can be selected accord-
ing to different possible criteria, for instance: imaging distance,
angle formed by the projective ray and the surface; size of the
imaged surface triangle. Evidently, this ‘single-image’ texturing
approach can lead to adjacent surface triangles receiving colour
from different images with varying radiometric characteristics.
The consequences on triangle borders can be radiometric distor-
tion and discontinuity artifacts (El-Hakim et al., 2003). Alterna-
tive elaborate responses to this, in the area of computer graphics
and computer vision, rely on colour interpolation, or ‘blending’.
For every point, appropriately weighted combinations of corres-
ponding triangle textures from all available images — or from a
suitably selected image subset — on which this point appears are
used (Neugebauer & Klein, 1999; Bernardini et al., 2001; Bueh-
ler et al., 2001; Wang et al., 2001; Rocchini et al., 2002). In this
By smoothing radiometric differences in this way, seamless texture with
no jumps in colour appearance can be obtained — at the possible
cost of a certain blurring effect (El-Hakim et al., 2003).
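A minimal sketch of such colour blending (the weights here are placeholders, not any of the cited schemes): each image on which the surface point appears contributes its colour value, and a normalised weighted average yields the blended result.

```python
def blend(colour_samples):
    """colour_samples: list of (colour, weight) pairs, one per source
    image seeing the point; return the weighted average colour."""
    total_w = sum(w for _, w in colour_samples)
    return sum(c * w for c, w in colour_samples) / total_w

# Three overlapping images observe the same surface point; the image
# with the largest weight dominates the blended grey value.
samples = [(100.0, 1.0), (110.0, 2.0), (130.0, 1.0)]
print(blend(samples))  # (100 + 220 + 130) / 4 = 112.5
```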
The approaches referred to above have been developed in the
field of computer graphics, where life-like animations, realism
or illumination are evidently important. A weighting strategy is
thus formulated mostly in the context of view-dependent texture
mapping, where interpolation schemes favour images observing
the object or scene closest in angle to the current viewing direc-
tion. In this way, surface specularities and incorrect model geo-
metry may be better captured (Debevec et al., 1996, 1998).
However, it has been pointed out that using a single texture map
in 3D models is usually sufficient (Wang et al., 2001). In this
sense — though one obviously has much to gain from research
in this field — it appears that static rather than dynamic texturing
is preferable for most instances of photogrammetric mapping. A
view-independent algorithm weights the contribution of partici-
pating original images according to their spatial relation to the
model — e.g. distance, angle of view — and their characteristics
— camera constant and resolution — in order to assign a unique
colour value to each surface unit (see, for instance, Poulin et al.,
1998; Grün et al., 2001).
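One possible form of such a view-independent weight (an assumption for illustration, not the formula of the cited works) combines imaging distance and the angle between the projective ray and the surface normal: closer, more frontal images contribute more.

```python
import math

def weight(distance, incidence_angle_rad):
    """Weight a source image by imaging distance and by the angle
    between the projective ray and the surface normal (0 = frontal
    view).  Inverse-square distance and cosine fall-off are assumed."""
    return math.cos(incidence_angle_rad) / distance**2

w_near_frontal = weight(5.0, 0.0)          # close, frontal view
w_far_oblique = weight(10.0, math.pi / 3)  # far, 60 degrees oblique
print(w_near_frontal > w_far_oblique)      # True
```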
Although colour blending may also be regarded, to some extent,
as an ‘error filtering’ process, existing error sources may cause
geometric and radiometric distortions. Obviously, the final pro-
duct is significantly affected by the accuracy with which image
orientations — relative to each other as well as in object space —
have been recovered. This also holds true for camera calibration
parameters. Consequently, a self-calibrating bundle adjustment,
including lens distortion, is indispensable. Further obvious error
sources causing misalignments include accuracy of 3D record-
ing, quality of surface description by 3D faces and model regi-
stration. Finally, though such problems are rather uncommon in
photogrammetric applications, significant differences in resolu-
tion of the source images, which can blur texture, are also to be
considered (Neugebauer & Klein, 1999; Buehler et al., 2001).
Here, an approach is presented for the automated generation of
orthoimages from a 3D mesh, derived from laser scanning. The
implemented algorithm identifies all surface triangles which are
seen in the viewing direction and then establishes whether these
appear or not on every available image. Each orthoimage pixel
is coloured through weighted blending of texture from all view-
ing images, whereby outlying colour data are automatically ex-
cluded. Results of experimental applications are also given.
2. PROJECTION AND TEXTURING ALGORITHM
For this procedure, the following input data are required:
• a triangulated 3D mesh in the form of successive XYZ triplets
describing the object surface;
• grayscale or colour images along with their interior and exte-
rior orientation parameters;
• the equation in space of the projection plane;
• the endpoints in object space, if necessary, of the area to be
projected;
• the pixel size of the new digital image.
It is seen that, besides orthogonal projections, oblique projec-
tions may also be accommodated.
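The mesh input, given as successive XYZ triplets, could be parsed along these lines (a sketch; the exact file layout is an assumption):

```python
def triples_to_triangles(values):
    """values: flat list of floats, 9 per triangle (3 vertices x XYZ);
    return a list of triangles, each a tuple of three (x, y, z) vertices."""
    assert len(values) % 9 == 0, "expects whole triangles"
    verts = [tuple(values[i:i + 3]) for i in range(0, len(values), 3)]
    return [tuple(verts[i:i + 3]) for i in range(0, len(verts), 3)]

# One triangle given as nine successive coordinate values.
flat = [0.0, 0.0, 0.0,  1.0, 0.0, 0.0,  0.0, 1.0, 2.5]
tris = triples_to_triangles(flat)
print(len(tris), tris[0][2])  # 1 (0.0, 1.0, 2.5)
```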
2.1 Model visibility and occlusion
In the first step, the triangulated 3D mesh is projected orthogo-
nally onto the specified plane of projection. In order to speed up
the search process, the area of the orthoimage is tessellated into
a rectangular grid whose cells are larger than those of the ortho-
image, e.g. by a factor of 5 (their size depends on factors such as
the available computer memory, the model size and that of the new
image). For each 2D triangle, the circumscribing orthogonal pa-
rallelogram is formed. This occupies a number of adjacent grid
cells, to which the identity number (ID) of the particular triangle
is assigned.
This procedure is repeated for all triangles, resulting in a table
containing all triangle IDs ascribed to each individual grid cell.
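The binning of triangles into the coarse grid can be sketched as follows (cell size and coordinates hypothetical): each projected triangle's circumscribing rectangle is mapped to the grid cells it overlaps, and the triangle ID is appended to each such cell.

```python
from collections import defaultdict

def bin_triangles(triangles_2d, cell):
    """triangles_2d: {id: [(x, y), (x, y), (x, y)]} of projected
    triangles; cell: grid cell size.  Return a table mapping each grid
    cell to the IDs of all triangles whose bounding box overlaps it."""
    grid = defaultdict(list)
    for tid, pts in triangles_2d.items():
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        # Circumscribing orthogonal rectangle, rasterised onto the grid.
        for i in range(int(min(xs) // cell), int(max(xs) // cell) + 1):
            for j in range(int(min(ys) // cell), int(max(ys) // cell) + 1):
                grid[(i, j)].append(tid)
    return grid

grid = bin_triangles({1: [(0.2, 0.2), (0.8, 0.2), (0.5, 0.9)],
                      2: [(1.1, 0.1), (1.9, 0.4), (1.5, 0.8)]}, cell=1.0)
print(sorted(grid[(0, 0)]), sorted(grid[(1, 0)]))  # [1] [2]
```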
In this way, all projected triangles actually containing a particu-
lar pixel of the orthoimage may be established by checking only
a limited number of triangles (namely, those ascribed to the cor-
responding grid cell). Among these model triangles intersected
in space by the projecting line of a particular orthoimage pixel,
the one whose intersection yields the largest elevation value is
selected; the elevation value, which provides the Z-value of the
orthoimage pixel, and the triangle ID number are stored. In this
manner, the model visibility/occlusion question has been handled.
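Resolving a single orthoimage pixel along these lines can be sketched as follows (a simplified illustration: the elevation of each hit is approximated here by the triangle centroid rather than by an exact ray-plane intersection).

```python
def point_in_tri(p, a, b, c):
    """2D point-in-triangle test via consistent signs of cross products."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or \
           (s1 <= 0 and s2 <= 0 and s3 <= 0)

def resolve_pixel(pixel_xy, candidates):
    """candidates: {id: [(x, y, z)] * 3}, the triangles ascribed to the
    pixel's grid cell.  Return (id, z) of the intersection with the
    largest elevation, or None if no triangle contains the pixel."""
    best = None
    for tid, tri in candidates.items():
        tri2d = [(x, y) for x, y, _ in tri]
        if point_in_tri(pixel_xy, *tri2d):
            z = sum(v[2] for v in tri) / 3.0  # flat-triangle approximation
            if best is None or z > best[1]:
                best = (tid, z)
    return best

tris = {1: [(0, 0, 1.0), (2, 0, 1.0), (0, 2, 1.0)],   # lower surface
        2: [(0, 0, 5.0), (2, 0, 5.0), (0, 2, 5.0)]}   # upper surface
print(resolve_pixel((0.5, 0.5), tris))  # (2, 5.0)
```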