i, i + 1, and the voxel's photoconsistency value is calculated using texture similarity measures among the projection regions on the images i − 1, i, i + 1. Then, the voxel with the maximum photoconsistency value is found, and all the voxels lying between this voxel and the camera center Ci lose votes, with an increasing penalty the closer they are to Ci. This process is repeated for all the rays that can be generated from view i. Once the process has been performed for all the views, the excess voxels caused by the silhouette-based reconstruction have lost most of their initial photoconsistency votes, and this excess volume is carved away by thresholding. Figure 3 illustrates the idea: the darkest colored voxels receive the highest votes, i.e. they have the maximum texture similarity according to the algorithm. The color of a voxel shows its vote.
Figure 3: The darkest colored voxels receive the highest votes.
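The vote-reduction loop described above can be sketched as follows; this is a minimal illustration, assuming per-voxel vote counters initialised from the silhouette-based reconstruction and rays whose voxels are ordered from the camera center outward (all names are hypothetical, not from the paper):

```python
def carve_by_voting(votes, rays_per_view, photoconsistency, vote_threshold):
    """Hypothetical sketch of the vote-based carving step.

    votes            : dict voxel -> current vote count
    rays_per_view    : for each view i, a list of rays; each ray is the
                       list of voxels it pierces, ordered from C_i outward
    photoconsistency : dict voxel -> photoconsistency value, computed from
                       texture similarity in images i-1, i, i+1
    """
    for view_rays in rays_per_view:            # every view i
        for ray_voxels in view_rays:           # every ray cast from C_i
            if not ray_voxels:
                continue
            # index of the voxel on this ray with maximum photoconsistency
            best = max(range(len(ray_voxels)),
                       key=lambda k: photoconsistency[ray_voxels[k]])
            # voxels between the camera center and the best voxel lose
            # votes; the penalty grows the closer the voxel is to C_i
            for k in range(best):
                votes[ray_voxels[k]] -= best - k
    # excess voxels have lost most of their votes: carve by thresholding
    return {v for v, n in votes.items() if n >= vote_threshold}
```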
3 APPEARANCE RECONSTRUCTION
Several studies exist on the appearance reconstruction of 3D models from real images (Niem and Broszio, 1995; Genç and Atalay, 1999; Lensch et al., 2000; Neugebauer and Klein, 1999). In most of these studies, the model is represented as a triangular wireframe, and each triangle is associated with one of the images for texture extraction. This method causes discontinuities on the triangle boundaries, as shown in Figure 4, since adjacent triangles can be associated with different source images. Applying a low-pass filter along the boundaries does not provide a global solution.
In this study, 2D texture mapping is used, but to reduce the drawbacks due to the lack of third-dimension information, the concept of surface particles is adopted (Schmitt and Yemez, 1999; Szeliski and Tonnesen, 1992). An abstraction is made over the actual representation of the model: the model is considered to be a surface composed of particles with three attributes: position, normal, and color. While reconstructing the appearance of the model, instead of associating triangles with images, particles are associated with images for texture extraction, as shown in Figure 5. Each particle's color is extracted from the images independently.
Figure 4: (a) Discontinuities on the triangle boundaries; (b) reconstruction using particles.
This is what makes the proposed method superior to the others: since a triangle is not necessarily textured from a single image, there are no discontinuities on the triangle boundaries caused by texturing adjacent triangles from different images. Each particle on the surface is associated with a pixel on the texture map, and the color information of the texture map is recovered by this means.
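The particle abstraction can be captured in a few lines; a minimal sketch, where the field names beyond the three attributes named in the text (position, normal, color) are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Particle:
    """Surface particle: the abstraction used instead of triangles."""
    position: Tuple[float, float, float]                 # 3D location on the surface
    normal: Tuple[float, float, float]                   # outward surface normal
    color: Optional[Tuple[float, float, float]] = None   # filled in during extraction
    texel: Optional[Tuple[int, int]] = None              # (u, v) pixel on the texture map
```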
A particle is not necessarily visible in all of the images in the sequence; it can be occluded or back-facing in some of them. The extraction therefore takes place in two steps: visibility check and color retrieval. The visibility check is performed using the particle normal and the particle position, by simple hidden-surface removal and occlusion detection algorithms. The particle is projected onto the source images in which it is visible, and a set of candidate color values, C = {c_0, ..., c_{m-1}}, is collected. In this study, the candidate color values are fused in order to produce the most photoconsistent appearance. Before assigning a color value to a particle, it is decided whether the information extracted from the source images is photoconsistent or not. Photoconsistency is defined in Definition 1.
The value of θ is empirical. The values of a photoconsistent set are expected to concentrate around the view-independent color of the particle. This method is very suitable for removing illumination artifacts, as shown in Figure 4-b. However, if the geometry of the object is not constructed precisely, the photoconsistency criterion will fail for most of the particles, causing irregularities in the surface appearance.
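The two-step extraction can be sketched as follows. This is an illustrative implementation under stated assumptions: the view objects with their center, is_occluded(), project(), and pixel() members are hypothetical helpers, and the concentration test around the mean color is a stand-in for the paper's Definition 1, which is not reproduced in this section:

```python
import numpy as np

def extract_particle_color(particle, views, theta):
    """Two-step extraction: (1) visibility check, (2) color retrieval
    with a photoconsistency test; theta is the empirical threshold."""
    candidates = []
    for view in views:
        # back-face test: the particle normal must point toward the camera
        to_camera = np.asarray(view.center) - np.asarray(particle.position)
        if np.dot(particle.normal, to_camera) <= 0:
            continue
        # occlusion test against the reconstructed geometry (hypothetical helper)
        if view.is_occluded(particle.position):
            continue
        u, v = view.project(particle.position)
        candidates.append(view.pixel(u, v))    # candidate color c_j
    if not candidates:
        return None
    c = np.asarray(candidates, dtype=float)
    mean = c.mean(axis=0)
    dist = np.linalg.norm(c - mean, axis=1)
    # photoconsistency test (illustrative stand-in for Definition 1): the
    # candidates must concentrate around a single color within theta
    if dist.max() > theta:
        # discard outliers such as specular highlights before fusing
        keep = dist <= theta
        if not keep.any():
            return None                        # photoconsistency failed
        mean = c[keep].mean(axis=0)
    return mean                                # fused, view-independent color
```

Fusing the inliers around the mean mirrors the expectation stated above that a photoconsistent set concentrates around the particle's view-independent color, which is why illumination artifacts are discarded rather than averaged in.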