CIPA 2003 XIXth International Symposium, 30 September - 04 October, 2003, Antalya, Turkey
In this study, an inexpensive and robust 3D model acquisition system is described. The system is best suited to acquiring 3D models of small artifacts such as cups, trinkets, jugs and statues. The image acquisition system consists of a turntable, a digital camera and a computer, as shown in Figure 1. The system works as follows.
The object to be modeled is placed on the turntable. By rotating the table, images of the object are acquired. This image sequence is then calibrated, and the object silhouettes are segmented from the background. Each silhouette is back-projected to an open cone volume in 3D space. By intersecting these open volumes, a coarse 3D model volume of the object is obtained. This coarse volume is further carved in order to remove excess volume in the concave parts. The surface appearance of the model is also recovered from the acquired images. The model is considered as a surface composed of particles, and the color of each particle is recovered from the images with an algorithm that computes the photo-consistent color for each particle. The resultant appearance is stored in a texture map, while the shape is stored in a triangular mesh. The overall system diagram is illustrated in Figure 2.
Figure 2: Overall system diagram.
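The silhouette intersection step described above can be sketched as follows. This is a minimal illustration of shape-from-silhouette carving, not the paper's actual implementation: the voxel layout, the mask format and the `project` callback are our own assumptions.

```python
import numpy as np

def carve_visual_hull(voxels, silhouettes, project):
    """Keep only voxels whose projection falls inside every silhouette.

    voxels      : (N, 3) array of voxel centres in world coordinates
    silhouettes : list of boolean masks of shape (H, W), True = object pixel
    project     : project(view_index, points) -> (N, 2) pixel coords (col, row)
    """
    keep = np.ones(len(voxels), dtype=bool)
    for i, sil in enumerate(silhouettes):
        h, w = sil.shape
        px = np.round(project(i, voxels)).astype(int)
        # a voxel that projects outside the image cannot be on the object
        inside = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
        keep &= inside
        rows = px[:, 1].clip(0, h - 1)
        cols = px[:, 0].clip(0, w - 1)
        # a voxel whose projection lands on a background pixel is carved away
        keep &= sil[rows, cols]
    return voxels[keep]
```

Intersecting the back-projected silhouette cones is exactly this test applied over all views: a voxel survives only if every silhouette claims it.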
The main advantage of the proposed approach is its low computational complexity. Most of the methods in the literature require a large amount of time and labor for reconstruction, whereas the complexities of our algorithms are reasonable. This enables building real-time graphical user interfaces on top of these algorithms, through which real-time interactive modifications can be made on the reconstructed models. The approach is very suitable for generating 3D models of small artifacts with high-resolution geometry and surface appearance. Such artifacts may have handles, holes, concavities, cracks, etc., and the proposed approach enables robust modeling of these properties as well. Furthermore, we are currently developing algorithms for 3D modeling from auto-calibrated images. The models obtained by this method are stored in Virtual Reality Modeling Language (VRML) format, which makes the transmission and publishing of the models easier.
The organization of the paper is as follows: we first describe our camera calibration and geometry reconstruction processes in the following section. Section 3 gives a detailed description of our appearance recovery algorithms. Results obtained in the framework of our study are given in Section 4, and the paper concludes with Section 5.
2 CAMERA CALIBRATION AND GEOMETRY RECONSTRUCTION
In order to compute the parameters of the camera, we use a multi-image calibration approach (Mülayim and Atalay, 2001). Our acquisition setup is made up of a rotary table with a fixed camera, as shown in Figure 1. The rotation axis and the distance from the camera center to this rotation axis remain the same during the turns of the table. Based on this idea, we have developed a vision-based geometrical calibration algorithm for the rotary table (Mülayim et al., 1999). Furthermore, we can very easily compute the distance from the camera center to the rotation axis of the table, which in fact facilitates the calculation of the bounding cube (Mülayim et al., 2000).
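Because the camera is fixed and the table rotates, each turntable step is equivalent to a new camera pose on a circle of constant radius around the rotation axis. The sketch below (conventions and names are ours, for illustration only) generates such equivalent camera centres from the estimated axis distance; the constant radius is the property the calibration exploits, and the circle's extent bounds the region used to build the bounding cube.

```python
import numpy as np

def camera_centres(n_views, distance, height=0.0):
    """Equivalent camera centres for a fixed camera and a rotary table.

    A fixed camera viewing a table that rotates in steps of 360/n_views
    degrees is equivalent to n_views cameras placed on a circle of radius
    `distance` (camera centre to rotation axis) around the axis, taken
    here as the world z axis. `height` is the camera elevation above the
    table plane. Returns an (n_views, 3) array of centres.
    """
    angles = 2.0 * np.pi * np.arange(n_views) / n_views
    return np.stack([distance * np.cos(angles),
                     distance * np.sin(angles),
                     np.full(n_views, height)], axis=1)
```

Every centre is at the same distance from the axis; this redundancy across views is what makes the multi-image calibration of the single fixed camera well constrained.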
Once the bounding volume is obtained, a coarse model of the object is computed by carving this volume using the silhouettes. This volume contains some extra voxels which in fact should not exist. In this context, we have implemented a stereo correction algorithm which removes these extra voxels using photoconsistency (Mülayim and Atalay, 2001). Algorithm 1, which is mostly inspired by Matsumoto et al. (Matsumoto et al., 1999), outlines the process.
Algorithm 1 Computing the photoconsistent voxels.
reset all photoconsistency values of the voxels in V_object to the max photoconsistency value
for all images i in the image sequence do
    for all visible voxels in image i do
        produce a ray from the camera optic center
        find the max photoconsistent voxel on the ray
        for all voxels between the max photoconsistent voxel and the camera optic center do
            reduce the voxel photoconsistency votes
        end for
    end for
end for
for all voxels v in voxel space V_object do
    if the photoconsistency of v is less than a threshold then
        remove v from V_object
    end if
end for
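The voting scheme of Algorithm 1 can be sketched in a few lines of Python. This is a simplified illustration, assuming the rays have already been traced through the voxel grid and that each voxel on a ray carries a precomputed photoconsistency score; the data layout and function names are ours, not the paper's.

```python
def carve_photoconsistency(voxel_votes, rays_per_image, threshold):
    """Sketch of the vote-reduction loop of Algorithm 1.

    voxel_votes    : dict voxel_id -> vote count, initialised to the
                     maximum photoconsistency value for every voxel
    rays_per_image : for each image, a list of rays; each ray is a list
                     of (voxel_id, score) pairs ordered from the camera
                     optic center outward
    threshold      : voxels whose votes fall below this are removed
    """
    for rays in rays_per_image:
        for ray in rays:
            # find the most photoconsistent voxel on the ray
            best = max(range(len(ray)), key=lambda j: ray[j][1])
            # voxels between the camera and that voxel are likely free
            # space carved too generously by the silhouettes: vote them down
            for voxel, _score in ray[:best]:
                if voxel in voxel_votes:
                    voxel_votes[voxel] -= 1
    # final pass: discard voxels whose votes dropped below the threshold
    return {v: n for v, n in voxel_votes.items() if n >= threshold}
```

The effect is that voxels repeatedly seen "in front of" a better-matching surface voxel lose votes across views and are eventually removed, which is how the concavities missed by pure silhouette carving get recovered.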
In the algorithm, each voxel in the object voxel space V_object starts with a high photoconsistency vote value; that is, each voxel on the model generated by the silhouette-based reconstruction is assumed to be on the real object surface. Each view i is then processed in the following manner. For each view i, rays from the camera center C_i through the voxels seen from that view are traversed voxel by voxel. Each voxel on the ray is projected onto the images i − 1,