design a volume based Shape from Structured Light algorithm
or a surface based Shape from Silhouette algorithm.
With the former method both underlying algorithms would
build their "native" model of the object. However, conversion
and intersection of the models would not be a simple task.
While conversion of the Shape from Silhouette's volumetric
model to a surface model is straightforward — one only has to
find 3D points of the volume belonging to the surface — an
intersection of two surface models can be rather complex. One
could start from the points obtained by Shape from Structured
Light (because they really lie on the object's surface, whereas
points on the surface of the volume obtained by Shape from
Silhouette only lie somewhere on the object's visual hull) and
fill up the missing surface points with points from the Shape
from Silhouette model.
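This fill-up strategy can be sketched as follows. The sketch is purely illustrative and assumes both models are available as 3D point sets; a simple nearest-neighbor distance threshold (the `radius` parameter, a hypothetical choice) decides which visual-hull points count as "missing" from the structured-light data:

```python
import numpy as np

def fill_surface(sl_points, hull_points, radius=0.5):
    """Keep all Shape from Structured Light points (they lie on the
    true object surface) and add only those visual-hull surface
    points that have no structured-light point within `radius`."""
    sl = np.asarray(sl_points, dtype=float)
    hull = np.asarray(hull_points, dtype=float)
    # pairwise distances between hull points and structured-light points
    d = np.linalg.norm(hull[:, None, :] - sl[None, :, :], axis=2)
    missing = hull[d.min(axis=1) > radius]  # hull points far from any SL point
    return np.vstack([sl, missing])
```

The "jumps" mentioned below correspond exactly to the added hull points, which may lie well outside the true surface.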
There are several problems with this approach. There could be
many "jumps" on the object surface, because the points taken
from the Shape from Silhouette model might be relatively far
away from the actual surface. The approach would also not be
very efficient, because we would need to build a complete
volumetric model through Shape from Silhouette, then intersect
it with every laser plane used for Shape from Structured Light
in order to create a surface model, and then, if we also want to
compute the volume of the object, we would have to convert the
final surface model back to a volumetric model.
Another possibility would be to convert the surface model
obtained by Shape from Structured Light to a volumetric model
and intersect it with the Shape from Silhouette model. In this
case the intersection is the easier part: for each voxel of the
observed space one would only have to look up whether both
models "agree" that the voxel belongs to the object; only such
voxels would be kept in the final model and all others defined
as background. The volume computation is also simple in this
case: it is the number of voxels in the final model multiplied
by the volume of a single voxel. But the problem with
this approach is the conversion of the Shape from Structured
Light's surface model to a volumetric model: in most cases, the
surface model obtained using a laser plane is very incomplete
(see the model of an amphora in Figure 9(b)) because of light
and camera occlusions (Figure 10), so one would have to decide
how to handle the missing parts of the surface.
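The voxel-wise intersection and the volume computation just described are straightforward. A minimal sketch, assuming both models are already available as boolean occupancy grids of the same resolution (the grid contents and the voxel size are illustrative):

```python
import numpy as np

# hypothetical occupancy grids: True = voxel belongs to the object
sfs_model = np.zeros((4, 4, 4), dtype=bool)   # Shape from Silhouette
sfsl_model = np.zeros((4, 4, 4), dtype=bool)  # Shape from Structured Light
sfs_model[1:3, 1:3, 1:3] = True               # a 2x2x2 block
sfsl_model[1:4, 1:3, 1:3] = True              # a 3x2x2 block

# keep only the voxels on which both models "agree"
final_model = sfs_model & sfsl_model

# volume = number of object voxels times the volume of a single voxel
voxel_volume = 0.5 ** 3                       # e.g. voxel edge of 0.5 mm
volume = int(final_model.sum()) * voxel_volume
```

The hard part, as noted above, is obtaining `sfsl_model` in the first place.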
And generally, the conversion of a surface model to a volumetric
model is a complex task, because if the surface is not
completely closed, it is hard to say whether a certain voxel lies
inside or outside the object. With closed surfaces one could
follow a line in 3D space starting from the voxel observed and
going in any direction and count how many times the line
intersects the surface. For an odd number of intersections one
can say that the voxel belongs to the object. But even in this
case there would be many special cases to handle, e.g. when the
chosen line is tangential to the object's surface.
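For a closed surface given as a triangle mesh, the parity test can be sketched as follows. This is a hypothetical illustration using the standard Möller-Trumbore ray/triangle intersection; the special cases mentioned above (e.g. tangential rays) are deliberately not handled here:

```python
import numpy as np

def ray_hits_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore ray/triangle intersection test (t > 0 only)."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return False
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return np.dot(e2, q) * inv > eps   # intersection in front of the origin

def voxel_inside(point, triangles):
    """Cast a ray in +x direction and count surface intersections;
    an odd count means the point lies inside the closed surface."""
    point = np.asarray(point, dtype=float)
    d = np.array([1.0, 0.0, 0.0])
    hits = sum(ray_hits_triangle(point, d, *tri) for tri in triangles)
    return hits % 2 == 1
```

For an open surface, such as the incomplete structured-light models discussed above, this count is meaningless, which is exactly the conversion problem.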
This reasoning led us to the following conclusions:
• Building a separate Shape from Structured Light surface
model and a Shape from Silhouette volumetric model
followed by converting one model to the other and
intersecting them is mathematically complex and
computationally costly.
• If we want to estimate the volume of an object using our
model, any intermediate surface models should be avoided
because of the problems of conversion to a volumetric
model.
When building a 3D volumetric model of an object based on a
number of its 2D images, the essential decision for each voxel is
whether it is a part of the object or belongs to the background.
Our approach therefore proposes building a single volumetric
model from the ground up, using both underlying methods in
each step (illustrated in Figure 11):
1. Binarize the acquired images for both Shape from
Silhouette and Shape from Structured Light in such a way
that the white image pixels possibly belong to the object
and the black pixels certainly belong to the background.
This is shown in Figure 11a.
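Step 1 might look as follows. The threshold is illustrative: it would have to be chosen conservatively for the actual setup, so that a pixel is marked black only when it is certainly background:

```python
import numpy as np

def binarize_conservative(gray, background_max=30):
    """Return a boolean image: True (white) = pixel possibly belongs
    to the object, False (black) = pixel surely background.
    `background_max` is a hypothetical intensity below which a pixel
    is certainly background (e.g. a dark backdrop)."""
    gray = np.asarray(gray)
    return gray > background_max
```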
2. Build the initial octree, containing a single root node
marked "black" (Figure 11b). This node is said to be at
level 0. Set the current level to 0.
3. All black nodes of the current level are assumed to
be in a linked list. Set the current node to the first
node in the list. If there are no nodes in the current
level, the final model has been built, so jump to Step 8.
Otherwise, continue with Step 4.
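Steps 2 and 3 amount to a level-by-level traversal of the octree. A minimal sketch of the data structure and the loop, with the per-node processing of the later steps left as a placeholder:

```python
from collections import deque

class Node:
    def __init__(self, level, index=(0, 0, 0)):
        self.level = level           # octree level (root = 0)
        self.index = index           # position of the node on its level
        self.color = "black"         # "black", "white" or "grey"
        self.children = []

root = Node(level=0)                 # Step 2: root node marked "black"
current_level_nodes = deque([root])  # Step 3: black nodes of the level

level = 0
while current_level_nodes:           # empty list => final model built
    next_level_nodes = deque()
    for node in current_level_nodes:
        # The later steps would classify `node` against all images
        # and, if it stays "grey", split it into 8 children appended
        # to next_level_nodes; here every node is kept as a leaf.
        pass
    current_level_nodes = next_level_nodes
    level += 1
```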
4. Project the current node of the current level into all
Shape from Silhouette binarized input images and intersect
it with the image silhouettes of the object (by simply
counting the percentage of white pixels within the projection
of the node). As a result of the intersection the node
can remain "black" (if it lies within the object) or be set to
"white" (it lies outside the object) or "grey" (it lies partly
within and partly outside the object). This is illustrated in
Figure 11c. If at least one image says "this node is white",
it is set to white. Otherwise, if at least one image says "this
node is grey", it is set to grey; only if all images agree
that the node is black does it stay black.
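The classification rule of Step 4 can be written down directly. The white-pixel fraction per projection is assumed to be given, and the exact 0%/100% comparisons used here could be relaxed with a tolerance in practice:

```python
def classify_in_image(white_fraction):
    """Classify a node from the fraction of white pixels inside its
    projection into one binarized input image."""
    if white_fraction == 0.0:
        return "white"   # projection contains no possible object pixels
    if white_fraction == 1.0:
        return "black"   # projection lies entirely within the silhouette
    return "grey"        # partly inside, partly outside

def combine_images(labels):
    """One "white" verdict wins; otherwise one "grey" verdict wins;
    the node stays "black" only if all images agree it is black."""
    if "white" in labels:
        return "white"
    if "grey" in labels:
        return "grey"
    return "black"
```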
5. If the current node after Step 4 is not white, it is
projected into two binarized Shape from Structured Light
images representing the two laser planes nearest to the node:
one plane is the nearest among the images acquired with
the turntable's rotation angle between 0° and 180° and the
other is the nearest among the images taken with the angle
between 180° and 360°. The separation into two intervals is
done because if we use a single nearest plane, it could
happen that the projection of the node lies completely in
the occluded part of the image. The two nearest planes defined
this way are almost identical, because they both contain the
rotational axis of the turntable (because of the way we set up
the laser plane, see Figure 2), so if the nearest plane in the
range 0° - 180° was at the angle a, then the nearest
plane in the range 180° - 360° will be at the angle a +