[Figure 1 block diagram: online path: Range Image -> Feature Extraction -> Matching; offline path: CAD Model -> Feature Extraction.]
Figure 1: Traditional paradigm in model-based computer vision with our modification (dashed line).
Segmentation algorithms are usually designed to fulfill some basic requirements: completeness, compactness and a minimum number of extracted regions. In contrast to many of the reported algorithms, we do not need a complete segmentation of the range image for our application. That is, not every measured point has to be assigned a surface label. We only need a few reliably detectable features in the scene to perform the matching. Nor do we require the boundaries of the surfaces to be extracted correctly. When dealing with objects of complex shape, surface boundaries are often corrupted by shadowing and self-occlusion and therefore do not yield reliable information for recognition. The number of detected regions is bounded by the number of surfaces present in the CAD model. It becomes clear that segmentation benefits strongly from information about the types of features expected to be present in the scene.
We propose a model-driven approach for feature extraction based on the CAD model, as shown in figure 1. This approach has the potential to avoid gross over-segmentation and misclassification. For every point in the range dataset we compute the fundamental curvature characteristics. The information extracted from the CAD model is then used to classify the point accordingly. Our approach differs from the work of others in that we do not use a generic surface model, but a fixed model specific to the object being processed. This changes the traditional paradigm in model-based computer vision, because we already use the CAD model data in the feature extraction stage. As an example of a more traditional system, Newman, Flynn and Jain (Newman et al., 1993) have developed a model-driven approach in which they classify range images into planar, spherical, cylindrical or conical surfaces. They then fit their generic surface model, a quadric surface (plane, cylinder, cone or sphere), to the data. Our system currently handles 15 different surface types.
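To make the contrast with a generic quadric fit concrete, the following is a minimal sketch of how a point's measured (H, K) values could be tested directly against footprints exported from the object-specific CAD model. The function name, data layout and tolerance are illustrative assumptions, not the actual implementation.

# Minimal sketch of model-driven point classification (illustrative only).
# 'model_footprints' stands in for the (H, K) signatures exported from the
# CAD model of the specific object being processed.
def classify_point(H, K, model_footprints, tol=1e-3):
    """Assign a range point to the CAD surface whose HK footprint it matches.

    model_footprints: list of (surface_id, H_model, K_model) tuples for the
    constant-curvature surfaces of the object model.
    Returns the matching surface_id, or None if the point fits no feature
    (an incomplete segmentation is acceptable in this approach).
    """
    best_id, best_dist = None, tol
    for surface_id, H_m, K_m in model_footprints:
        dist = max(abs(H - H_m), abs(K - K_m))
        if dist < best_dist:
            best_id, best_dist = surface_id, dist
    return best_id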
Surface curvature has been a favorite criterion for segmentation among researchers in related fields for some time. In most works, only the signs of the curvatures have been used to classify patches. Paul J. Besl (Besl, 1988) introduced a method that uses eight different curvature classes based on the signs of mean and Gaussian curvature. We believe that advances in sensor technology, which have brought us high-resolution, high-quality range images, enable us to use exact measurements of curvature to classify points into more complex surface types corresponding to those of the CAD model. The necessity of exact curvature measurement has implications for our curvature estimation scheme, as shown below.
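For reference, the sign-based scheme of (Besl, 1988) assigns one of eight fundamental classes from the signs of H and K alone. The sketch below illustrates that idea for contrast with the exact-value classification used here; the zero threshold and the class names are assumptions, and the sign convention depends on the orientation of the surface normals.

# Sign-based HK classification in the spirit of (Besl, 1988), shown only to
# contrast with exact-value classification; eps and naming are assumptions.
def hk_sign_class(H, K, eps=1e-4):
    sH = 0 if abs(H) < eps else (1 if H > 0 else -1)
    sK = 0 if abs(K) < eps else (1 if K > 0 else -1)
    table = {
        (-1,  1): "peak",          ( 1,  1): "pit",
        (-1,  0): "ridge",         ( 1,  0): "valley",
        ( 0,  0): "flat",          (-1, -1): "saddle ridge",
        ( 1, -1): "saddle valley", ( 0, -1): "minimal surface",
    }
    # (0, 1) cannot occur: K > 0 implies both principal curvatures share a sign.
    return table.get((sH, sK), "undefined")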
This paper focuses on the feature extraction process; the object recognition itself is not presented here. We have previously reported on our approach using constrained tree search to match scene and model features. A similar approach can be combined with the feature extraction presented in this work.
2 DERIVING FEATURES FROM THE CAD MODEL
In this work we use Pro/ENGINEER, a widely used solid modeling CAD system, to perform CAD-related operations. The system provides an application interface, which allowed us to integrate our own software into the system. Interfacing to the system relieved us of some tedious programming work, such as reading CAD files and identifying individual surfaces. While the implementation is specific to this system, the basic idea of our work applies to CAD data in general. Figure 2(a) shows the integration of our software into the user interface of the CAD system.
A CAD model typically consists of several individual surfaces which were generated during the design process. We have implemented a routine which iterates over all surfaces of the model. For each surface we output the surface ID, the surface type and the curvature characteristics. For the curvature we concentrate on the mean and Gaussian curvatures H and K. Mean and Gaussian curvature form a two-dimensional space, which we call HK space. Each surface has a distinct footprint in HK space. Some surface types, such as plane, cylinder and sphere, occupy only a single point in HK space, i.e. the mean and Gaussian curvature are constant across the surface. Others occupy regions of arbitrary shape in HK space; for example, the mean and Gaussian curvature of a torus lie along a line in HK space. The values of mean and Gaussian curvature can be displayed in a two-dimensional plot, as seen in figure 3(a).
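Why the torus traces a line follows from the standard relations H = (k1 + k2)/2 and K = k1*k2: one principal curvature of a torus is fixed by the tube radius r (k1 = 1/r), so K = k1(2H - k1) = (2/r)H - 1/r², a straight line in the HK plane determined by the tube radius. The short check below verifies this numerically; the radii are arbitrary example values, not taken from the paper.

# Numerical check that the torus HK footprint lies on K = (2/r)*H - 1/r**2.
import math

r, c = 0.5, 2.0                                # tube radius and ring radius (example values)
for v in [k * math.pi / 8 for k in range(16)]:
    k1 = 1.0 / r                               # principal curvature around the tube
    k2 = math.cos(v) / (c + r * math.cos(v))   # principal curvature along the ring
    H, K = 0.5 * (k1 + k2), k1 * k2
    assert abs(K - ((2.0 / r) * H - 1.0 / r ** 2)) < 1e-12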
During iteration, when we encounter a surface of constant curvature, we compute the curvature from the geometric parameters. In detail, this yields the values (H, K) = (0, 0) for a plane, (1/(2R), 0) for a cylinder and (1/R, 1/R²) for a sphere.
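These values follow from the principal curvatures: a plane has k1 = k2 = 0; a cylinder of radius R has k1 = 1/R and k2 = 0, hence H = 1/(2R) and K = 0; a sphere of radius R has k1 = k2 = 1/R, hence H = 1/R and K = 1/R². A hedged sketch of such an offline export step is given below; the surface records and field names are illustrative stand-ins, not the Pro/ENGINEER interface calls.

# Sketch of the offline footprint export described above (illustrative only).
# 'surfaces' stands in for whatever the CAD system's API returns when
# iterating over the model's surfaces.
def hk_footprint(surface):
    """Return the constant (H, K) footprint from a surface's geometric parameters."""
    if surface["type"] == "plane":
        return (0.0, 0.0)
    if surface["type"] == "cylinder":
        R = surface["radius"]
        return (1.0 / (2.0 * R), 0.0)
    if surface["type"] == "sphere":
        R = surface["radius"]
        return (1.0 / R, 1.0 / R ** 2)
    return None  # non-constant curvature (e.g. torus): sample the surface instead

def export_footprints(surfaces):
    # One record per surface: ID, type and HK footprint, as described in the text.
    return [(s["id"], s["type"], hk_footprint(s)) for s in surfaces]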