2.1 Camera Calibration and Image Orientation
Prior to image acquisition, the camera must be calibrated.
We calibrated the sensor using several images of a calibration
object with three perpendicular square planes and 25 control
points on each side. Since we use an off-the-shelf CCD camera, we
switch off the auto-focus so that the focal length remains fixed
throughout the whole process.
In a second step, the object is placed inside the calibration
frame in order to define some natural control points accurately.
We performed a bundle block adjustment with all the images,
which delivered the interior camera parameters as well as the
coordinates of the control points, initially introduced as new
points.
To compute the object's model, the images must be oriented,
i.e., the rotations and positions of the cameras must be known.
In many cases we cannot mark control points on the objects;
therefore natural textures can be used instead.
The images were adjusted in a bundle block adjustment: we used
enough tie points across the circular camera setup to perform a
bundle block adjustment covering all images. Using the
previously calibrated camera, we achieved very accurate results
for the image orientations: the projection centers were
determined with an accuracy of 1-2 mm and the rotations with
0.05-0.1 gon.
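The orientation results above mean that for every image the interior orientation (focal length, principal point) and exterior orientation (rotation, projection center) are known. As a minimal sketch of how these parameters relate a 3D point to its image coordinates, the standard collinearity projection can be written as follows; this is our own illustration of the textbook equations, not code from the reconstruction system, and the function and variable names are ours:

```python
import numpy as np

def project_point(X, R, C, f, principal_point):
    """Project a 3D world point X into image coordinates via the
    collinearity equations, given the exterior orientation
    (rotation matrix R, projection center C) and the interior
    orientation (focal length f, principal point)."""
    # Transform the point into the camera coordinate system.
    Xc = R @ (np.asarray(X, float) - np.asarray(C, float))
    # Collinearity: image coordinate = principal point minus f
    # times the lateral offset scaled by depth.
    x = principal_point[0] - f * Xc[0] / Xc[2]
    y = principal_point[1] - f * Xc[1] / Xc[2]
    return np.array([x, y])
```

For example, with an identity rotation, a camera at the origin, and f = 50, a point at (1, 0, 10) projects to (-5, 0): the 1-unit lateral offset at depth 10 is scaled by -f/Z.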
3. APPROXIMATE MODEL
One of the well-known approaches to acquire 3D models of
objects is voxel-based visual hull reconstruction, which
recovers the shape of the objects from their contours.
A silhouette image is a binary image that is easily obtained by
image segmentation: each pixel indicates whether it represents
an object point or a background point. Since the blue
background we use is sufficiently homogeneous, we can easily
define a hue domain that is considered background. A pixel's
position in the IHS color space is examined to decide whether it
represents background or object. We performed the image
segmentation using the academic software HsbVis.
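A hue-domain classification of this kind can be sketched as follows. This is not the HsbVis implementation; it uses the standard-library HSV conversion as a stand-in for the IHS color space, and the hue and saturation thresholds are illustrative values for a blue background, not the ones used in the paper:

```python
import colorsys

def is_background(r, g, b, hue_min=0.50, hue_max=0.72):
    """Classify a pixel (RGB in [0, 1]) as blue background if its
    hue falls inside an assumed blue hue interval."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Very dark or unsaturated pixels carry unreliable hue.
    if s < 0.2 or v < 0.1:
        return False
    return hue_min <= h <= hue_max

def silhouette(image):
    """Binary silhouette image: True where a pixel belongs to the
    object, False where it belongs to the background."""
    return [[not is_background(r, g, b) for (r, g, b) in row]
            for row in image]
```

A saturated blue pixel such as (0.1, 0.1, 0.9) is classified as background, while a red pixel such as (0.9, 0.1, 0.1) is kept as object.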
Figure 1: Intersection of silhouette cones
To start with, we define a 3D discrete space that contains
only opaque voxels with the value "255", representing object
points. To compute the silhouette cones, we project all of the
cube's voxels into every image. If the image coordinate falls on
a background pixel, the voxel is labeled transparent by giving it
the value "0", meaning that the voxel now represents an empty
region in the voxel cube. The volume intersection algorithm
intersects the silhouette cones from all images to obtain an
estimate of the object's geometry, called the object's visual
hull. See (Kuzu and Rodehorst, 2000) for more details.
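The carving loop described above can be sketched as follows. This is a simplified illustration rather than the paper's implementation: `project(cam, idx)` stands in for the voxel-to-image projection (an assumed helper), and the silhouettes are boolean arrays with True for object pixels:

```python
import numpy as np

def carve_visual_hull(grid_shape, cameras, silhouettes, project):
    """Volume intersection sketch: start with a fully opaque cube
    (value 255) and set every voxel that projects onto a background
    pixel in any silhouette image to transparent (value 0)."""
    voxels = np.full(grid_shape, 255, dtype=np.uint8)
    for idx in np.ndindex(grid_shape):
        for cam, sil in zip(cameras, silhouettes):
            u, v = project(cam, idx)
            inside = 0 <= v < sil.shape[0] and 0 <= u < sil.shape[1]
            # Outside the image or on background: carve the voxel away.
            if not inside or not sil[v, u]:
                voxels[idx] = 0
                break
    return voxels
```

With a single camera and a 2x2 silhouette whose top-right pixel is background, the whole column of voxels projecting onto that pixel is carved away while the rest of the cube remains opaque.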
Figure 1 shows the intersection of the silhouette cones acquired
using 1, 3, 5 and 9 images. As the number of images increases,
this method obtains better approximations of the object's true
shape.
As shown in Figure 2, concavities cannot be recovered with this
method, since the viewing region does not completely surround
the object. The accuracy of the visual hull depends on the
number of images and the complexity of the object.

International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXV, Part B5. Istanbul 2004
Figure 2: Concave areas in visual hull reconstruction
However, since the result encloses the largest possible volume
in which the true shape lies, and the implementation is
straightforward, it is an attractive method for applications
where only an approximate shape is required. We use the visual
hull as the first step of our reconstruction algorithm and
consider the shell carving algorithm a refinement method that
carves away the necessary voxels in the concave areas of the
visual hull for a more precise reconstruction.
4. COMPUTATION OF VISIBILITY INFORMATION
It is crucial to find out which voxel is visible in which image.
We use a line tracing algorithm that checks each voxel along the
line from the voxel of interest towards the camera, testing
whether it is a background or an object voxel. As soon as an
opaque voxel is encountered, the initial voxel is considered
occluded. When the line exits the defined voxel cube, the
tracing can be stopped and the voxel assumed visible. Whether a
voxel lies on the back side of the object or is occluded by
another voxel, the algorithm correctly reports whether it is
visible.
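As a minimal sketch of such a line tracing test, the following marches from a voxel towards the camera in small steps, stopping when it either hits an opaque voxel (occluded) or leaves the cube (visible). This is a simplified fixed-step march for illustration, not an exact voxel-traversal (DDA) algorithm, and the names are ours:

```python
import numpy as np

def is_visible(voxels, start, camera_pos, step=0.5):
    """Trace the line of sight from a voxel towards the camera.
    Returns False if an opaque voxel (value 255) blocks the line,
    True once the line exits the voxel cube unobstructed."""
    start = np.asarray(start, dtype=float)
    start_idx = tuple(np.round(start).astype(int))
    direction = np.asarray(camera_pos, dtype=float) - start
    direction /= np.linalg.norm(direction)
    pos = start + direction  # step off the start voxel itself
    while True:
        ijk = tuple(np.round(pos).astype(int))
        # Leaving the cube: nothing blocked the line of sight.
        if any(c < 0 or c >= s for c, s in zip(ijk, voxels.shape)):
            return True
        # An opaque voxel on the way occludes the start voxel.
        if voxels[ijk] == 255 and ijk != start_idx:
            return False
        pos += direction * step
```

For example, if an opaque voxel sits between the query voxel and the camera, the test reports occlusion; a voxel with a clear path to the cube boundary is reported visible.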
Figure 3: Considering occluded areas
In Figure 3, we show why knowledge of the visibility can be
crucial. A closer look at the vase shows that the handle
occludes some voxels in some specific images. Hence these
images, which in terms of viewing angle theoretically have the
best view of the occluded voxels, cannot actually see them and
therefore should not be considered. From the set of remaining
images, the best candidate needs to be chosen.
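One plausible way to pick such a candidate is to exclude the occluded images first and then rank the rest by viewing angle, e.g. preferring the image whose viewing direction is most frontal to the surface. The exact ranking criterion used in our system may differ; the sketch below simply maximizes the cosine between the surface normal and the viewing direction, and all names are illustrative:

```python
import numpy as np

def best_image(voxel_pos, normal, camera_positions, visible_flags):
    """Among the images where the voxel is visible, return the index
    of the one whose viewing direction is closest to the surface
    normal (largest cosine), or None if no image sees the voxel."""
    voxel_pos = np.asarray(voxel_pos, float)
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    best, best_cos = None, -1.0
    for i, (cam, visible) in enumerate(zip(camera_positions,
                                           visible_flags)):
        if not visible:          # occluded images are excluded up front
            continue
        view = np.asarray(cam, float) - voxel_pos
        view /= np.linalg.norm(view)
        cos = float(np.dot(n, view))
        if cos > best_cos:
            best, best_cos = i, cos
    return best
```

For a voxel with normal (0, 0, 1), a camera straight above it wins over one to the side; if the overhead camera is occluded, the side camera is chosen instead.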