6. TEXTURE MAPPING
In our system, the relation between each intensity image and the geometric model is known from the camera orientation and calibration procedure. Given a 3D polygon, the part of the intensity image within the area defined by its projection is extracted and warped to
fit into its counterpart 3D polygon. For reviews of the various
texture mapping techniques, see Haeberli and Segal, 1993; Lansdale, 1991; and Weinhaus and Devarajan, 1997.
In principle, the following algorithm could be used for texture
mapping:
For each 3D triangle t:
1. select, from the set of images taken of the scene, one image i in which triangle t appears,
2. using exterior orientation, determine the correspondence
between 3D triangle vertex coordinates in space and 2D
coordinates in image i,
3. specify 3D and texture coordinates in some modeling
language such as VRML, and
4. view the scene using a standard viewer.
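As an illustration only, the following sketch shows steps 2 and 3 of this naive procedure; the collinearity-equation sign convention, the camera parameters, the sensor format and the triangle are assumed values for the example, not the configuration of our system.

```python
# Minimal sketch of steps 2 and 3 of the naive approach (illustrative values).
import numpy as np

def project(P, R, X0, c, x0=0.0, y0=0.0):
    """Collinearity equations (one common sign convention): object point P
    -> image coordinates, given rotation R, projection centre X0,
    principal distance c and principal point (x0, y0)."""
    d = R @ (P - X0)                       # point in camera coordinates
    return np.array([x0 - c * d[0] / d[2],
                     y0 - c * d[1] / d[2]])

def tex_coords(xy, sensor_w, sensor_h):
    """Map centred image coordinates (metres) to texture coordinates in [0, 1]."""
    return (xy[0] / sensor_w + 0.5, xy[1] / sensor_h + 0.5)

# Hypothetical triangle (object coordinates, metres) and camera
triangle = [np.array([1.0, 2.0, 0.0]),
            np.array([2.0, 2.0, 0.0]),
            np.array([1.5, 3.0, 0.0])]
R, X0, c = np.eye(3), np.array([1.5, 2.3, 10.0]), 0.008      # 8 mm lens
sensor_w, sensor_h = 0.0096, 0.0072                          # 9.6 x 7.2 mm chip

uv = [tex_coords(project(P, R, X0, c), sensor_w, sensor_h) for P in triangle]

# Step 3: texture coordinates for one VRML IndexedFaceSet triangle
print("texCoord TextureCoordinate { point [",
      ", ".join("%.4f %.4f" % p for p in uv), "] }")
```

A viewer would then interpolate these texture coordinates across the triangle, which is exactly where the problems discussed next arise.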
However, due to the following considerations, this simple
approach is not feasible in most cases:
• The correct mapping between the plane in which triangle t lies and the image plane of image i is given by a projective transform. Since viewers do not use this transform, distortion arises at triangle edges (illustrated in the sketch after this list).
• When standard lenses are used for the cameras, lens distortion parameters have to be applied; otherwise, distortions will be visible at common edges of adjacent triangles mapped from different images.
• Usually, it is desirable to have a constant texel size on the object. This results in a more uniform appearance and also makes it possible to control file size and rendering speed more precisely.
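The first of these points can be illustrated numerically: along a triangle edge, a viewer effectively interpolates the vertex texture coordinates linearly, whereas the true plane-to-image mapping is projective. The geometry and camera below are hypothetical.

```python
# Along a triangle edge, linear interpolation of the vertex texture
# coordinates (what a viewer does) differs from the projective mapping.
# project() is the collinearity sketch from above, simplified to R = I.
import numpy as np

def project(P, X0, c):
    d = P - X0                                  # camera coordinates (R = I)
    return np.array([-c * d[0] / d[2], -c * d[1] / d[2]])

X0, c = np.array([0.0, 0.0, 5.0]), 0.008        # camera 5 m above the edge
V0, V1 = np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 2.0])

p_true = project(0.5 * (V0 + V1), X0, c)        # projective image position of the midpoint
p_lin = 0.5 * (project(V0, X0, c) + project(V1, X0, c))   # viewer-style interpolation
print(p_true, p_lin)                            # the two positions differ
```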
error source | visible at triangle edges | type | technique used
wrong mapping (viewer) | all | Geometric | warping according to collinearity equations
lens distortion | mapped from different images | Geometric | application of additional parameters
radiometric differences between cameras | mapped from different images | Radiometric | global gray-value adaptation, blending
non-uniform radiometry across single camera images | mapped from different images | Radiometric | local gray-value adaptation, blending
large deviations of triangle mesh from true surface | mapped from different images | Geometric | local triangle re-assignment, blending
Table 1. Error sources for visual discontinuities in mapped scenes and techniques used to minimize their visual impact.
Thus, it is obvious that image warping has to be done
independently of what the viewer does to render the scene. Even
when correct modeling of exterior, interior and additional
camera parameters is used, however, there are still problems in
practice that may lead to geometric and radiometric
discontinuities which can easily disturb the impression of
looking at a “real” scene. For example, radiometric differences
between the cameras lead to radiometric differences along
triangle edges; overly large deviations of the underlying triangle
mesh from the true object surface give rise to geometric errors
(e.g. parts of the object’s surface appear in more than one
triangle texture). Table 1 summarizes possible error sources and
the techniques we adopted to minimize their visual impact. We
address each of these problems in the following sections.
6.1 Proper Geometric Fit
As discussed above, image warping has to be done
independently of the transformation applied by the viewer. To
that end, the employed method defines a local texel coordinate
system for each 3D triangle. The texel size (in object
coordinates) can be set to the desired resolution. Each texel is
then computed using exterior and interior orientation, including
lens distortion parameters obtained from camera calibration. As
seen in figure 5, there is a clearly discernible difference between
triangles mapped with and without distortion parameters.
Figure 5: Ensuring geometric fit by using distortion parameters. (a) without distortion parameters; (b) with distortion parameters.
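The per-texel computation described above can be sketched as follows. The local texel frame (an origin and two in-plane unit vectors), the single radial distortion coefficient k1 and the nearest-neighbour resampling are simplifying assumptions for the example; the actual computation uses the lens distortion parameters obtained from camera calibration.

```python
# Sketch of the per-texel computation: each texel centre (object coordinates)
# is projected into the source image, lens distortion is applied, and the
# texture patch is filled by resampling.
import numpy as np

def project(P, R, X0, c):
    d = R @ (P - X0)                            # collinearity equations
    return np.array([-c * d[0] / d[2], -c * d[1] / d[2]])

def distort(xy, k1):
    """Simple radial distortion x' = x * (1 + k1 * r^2), ideal -> observed."""
    return xy * (1.0 + k1 * (xy[0] ** 2 + xy[1] ** 2))

def fill_triangle_texture(image, n, origin, e_u, e_v, texel_size,
                          R, X0, c, k1, pixel_size):
    """Fill an n x n texture patch with constant texel size (in object
    coordinates); clipping to the triangle itself is omitted for brevity."""
    tex = np.zeros((n, n), dtype=image.dtype)
    rows, cols = image.shape
    for i in range(n):
        for j in range(n):
            # texel centre in object coordinates, within the local texel frame
            P = origin + (j + 0.5) * texel_size * e_u + (i + 0.5) * texel_size * e_v
            xy = distort(project(P, R, X0, c), k1)
            col = int(np.rint(xy[0] / pixel_size + cols / 2.0))
            row = int(np.rint(rows / 2.0 - xy[1] / pixel_size))
            if 0 <= row < rows and 0 <= col < cols:
                tex[i, j] = image[row, col]     # nearest-neighbour resampling
    return tex
```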
6.2 Radiometric Differences
Usually, radiometric discontinuities result along common edges
of adjacent triangles mapped from different images (see e.g.
figure 7(a)). The main reasons for this are
1. radiometric differences between cameras,
2. non-uniform response of each camera across the image
plane, and
3. different sensed brightness due to different camera
positions (i.e. different orientation relative to surface
normal vector).
(1) can result from different aperture settings; however, since in
our case video cameras with automatic gain control are used,
the radiometric differences have to be modeled on a per-image
basis rather than per camera. We address this problem by a
method termed "global gray-value adaptation". (2) is most often
caused by a brightness decrease from the image center to image
borders. Both (2) and (3) can be tackled by a radiometric
correction on a per-triangle basis (termed "local gray-value
adaptation" in the following).
The global gray-value adaptation estimates gray-value offsets
between images. The gray-value differences along the borders between adjacent regions (triangle sets) are minimized by a least-squares adjustment (figure 6).
Figure 6: Global gray-value adaptation. Left: regions and borders formed by triangles mapped from the same image. Right: corresponding observations d_ij and unknowns h_i.
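A minimal sketch of this adjustment is given below. It assumes that each observation d_ij is the mean gray-value difference along the common border of the regions mapped from images i and j, and that one offset is fixed to define the datum; the parameterisation and weighting actually used are not detailed here, so these are assumptions.

```python
# Global gray-value adaptation as a least-squares problem: estimate per-image
# offsets h_i from border observations d_ij, with d_ij ~ h_i - h_j.
import numpy as np

def adjust_offsets(observations, n_images):
    """observations: list of (i, j, d_ij).  Returns offsets h (gauge: h[0] = 0)."""
    A = np.zeros((len(observations) + 1, n_images))
    b = np.zeros(len(observations) + 1)
    for k, (i, j, d_ij) in enumerate(observations):
        A[k, i], A[k, j], b[k] = 1.0, -1.0, d_ij
    A[-1, 0] = 1.0                       # pseudo-observation fixing h_0 = 0
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return h

# Example: three images with pairwise border differences
print(adjust_offsets([(0, 1, 4.0), (1, 2, -2.5), (0, 2, 1.0)], 3))
```

Because the observations only determine differences between offsets, the single constraint removes the remaining rank defect without changing the residuals of the border observations.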