mine which points are inside and which outside the hexagon,
and there can be hundreds of thousands of octree nodes that
need to be processed. It is much simpler (and therefore faster)
to use the bounding box of the eight projected points. Figure 6 shows
a projected octree node and the corresponding bounding box.
Figure 6. Projection of a node (a) and its bounding box (b)
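The bounding-box step above can be sketched as follows. This is a minimal illustration, assuming the eight cube vertices have already been projected to 2D image coordinates (the projection itself is not shown here):

```python
def bounding_box(projected_vertices):
    """Axis-aligned 2D bounding box of the eight projected cube vertices.

    projected_vertices: sequence of (x, y) image coordinates (floats).
    Returns (x_min, y_min, x_max, y_max).
    """
    xs = [p[0] for p in projected_vertices]
    ys = [p[1] for p in projected_vertices]
    return (min(xs), min(ys), max(xs), max(ys))
```

Comparing this axis-aligned box against the silhouette avoids the point-in-hexagon tests that an exact projection would require.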
The bounding box is tested for intersection with the object's
silhouette in the current input (binary) image. All image pixels
within the bounding box are checked for their color, whether
they are black or white. The output of the intersection testing
procedure is the percentage of black pixels within the bounding box,
i.e., the percentage of pixels belonging to the object. If this
percentage is equal to or higher than a user-definable threshold for
black nodes, the node is marked as black. If the percentage is
smaller than or equal to a user-definable threshold for white
nodes, the node is marked as white. Otherwise, the node is
marked as gray and is divided into eight child nodes representing
eight sub-cubes of finer resolution.
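The black/white/gray classification can be expressed compactly. The threshold values below are illustrative defaults only; the text specifies that both thresholds are user-definable:

```python
def classify_node(black_fraction, black_threshold=0.95, white_threshold=0.05):
    """Classify an octree node from the fraction of object (black) pixels
    inside its projected bounding box.

    The threshold defaults are assumptions for illustration; the paper
    leaves both values to the user.
    """
    if black_fraction >= black_threshold:
        return "black"   # node lies (almost) entirely inside the object
    if black_fraction <= white_threshold:
        return "white"   # node lies (almost) entirely outside the object
    return "gray"        # ambiguous: subdivide into eight child nodes
```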
The calculated image coordinates of the cube's vertices can lie
between two image pixels, and a pixel is the smallest testable
unit for intersection testing. Which pixels are considered to be
"within" the bounding box? Figure 7 illustrates our answer to
this question. We decided to test only pixels that lie completely
within the bounding box (Figure 7a), because this way fewer pixels
need to be tested than if all pixels at least partly covered by the
bounding box were included. The
pixels at the border of the bounding box are excluded, because
most of them do not lie within the hexagon approximated by the
bounding box. In the special case that no pixel lies
completely within the bounding box (Figure 7b), the pixel
closest to the center of the bounding box is checked for its
color.
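A sketch of this pixel-selection rule, under the assumption that pixel (i, j) covers the unit square [i, i+1) x [j, j+1) in image coordinates:

```python
import math

def pixels_to_test(x_min, y_min, x_max, y_max):
    """Pixels (i, j) whose unit square [i, i+1) x [j, j+1) lies completely
    within the bounding box (Figure 7a). If no pixel fits entirely, fall
    back to the single pixel closest to the box center (Figure 7b)."""
    # Pixel i is fully inside horizontally iff i >= x_min and i + 1 <= x_max.
    i_range = range(math.ceil(x_min), math.floor(x_max - 1) + 1)
    j_range = range(math.ceil(y_min), math.floor(y_max - 1) + 1)
    pixels = [(i, j) for i in i_range for j in j_range]
    if not pixels:
        cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
        pixels = [(int(cx), int(cy))]
    return pixels
```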
(a) at least one pixel completely within the bounding box;
(b) no pixels completely within the bounding box
Figure 7. Selection of pixels for the intersection test
The octree representation has several advantages [3]: for a
typical solid object it is an efficient representation, because of a
large degree of coherence between neighboring volume elements
(voxels), which means that a large piece of an object can
be represented by a single octree node. Another advantage is the
ease of performing geometrical transformations on a node,
because they only need to be performed on the node's vertices.
The disadvantage of octree models is that they digitize space
through cubes whose resolution depends on the maximal octree
depth, and therefore they cannot represent smooth surfaces.
4. COMBINATION OF ALGORITHMS
An input image for Shape from Silhouette defines a conic
volume in space which contains the object to be modeled
(Figure 8a). Another input image taken from a different view
defines another conic volume containing the object (Figure 8b).
Intersection of the two conic volumes narrows down the space
the object can possibly occupy (Figure 8c). With an increasing
number of views, the intersection of all conic volumes approximates
the actual volume occupied by the object better and better,
converging to the 3D visual hull of the object. Therefore,
by its nature Shape from Silhouette defines a volumetric model
of an object.
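The intersection of conic volumes can be sketched as a carving step: a voxel is kept only if its projection falls inside the silhouette in every view. The `project` function below is a hypothetical placeholder for the camera projection of each view:

```python
def _inside(silhouette, uv):
    """True if pixel (u, v) lies inside the binary silhouette image."""
    u, v = uv
    return (0 <= v < len(silhouette)
            and 0 <= u < len(silhouette[0])
            and silhouette[v][u] == 1)

def carve(voxel_centers, silhouettes, project):
    """Keep only voxels whose projection is inside every silhouette.

    voxel_centers: iterable of 3D points.
    silhouettes:   one binary image (nested lists of 0/1) per view.
    project:       hypothetical function mapping (point, view_index)
                   to integer pixel coordinates (u, v) in that view.
    """
    return [p for p in voxel_centers
            if all(_inside(sil, project(p, k))
                   for k, sil in enumerate(silhouettes))]
```

With enough views, the surviving voxels approximate the visual hull described above.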
Figure 8. Two conic volumes and their intersection
An input image for Shape from Structured Light using laser
light defines solely the points on the surface of the object which
intersect the laser plane (Figure 9a). Using multiple views
provides us with a cloud of points belonging to the object
surface (Figure 9b), i.e., with a surface model of the object.
Figure 9. Laser projection and cloud of points
The main problem that needs to be addressed in an attempt to
combine these two methods is how to adapt the two representations
to one another, i.e., how to build a common 3D model
representation. This can be done in several ways:
• Build the Shape from Silhouette's volumetric model and
the Shape from Structured Light's surface model independently
from one another. Then, either convert the volumetric
model to a surface model and use the intersection of
the two surface models as the final representation, or convert
the surface model to a volumetric model and use the
intersection of the two volumetric models as the final
representation.
• Use a common 3D model representation from the ground
up, avoiding any model conversions. That means either