OBJECT RECOGNITION FOR A FLEXIBLE MANUFACTURING SYSTEM
Y. Huang & J. C. Trinder & B. E. Donnelly
School of Geomatic Engineering
The University of New South Wales
Sydney NSW 2052, Australia
ISPRS Commission V, Working Group 3
KEY WORDS: Edge, Extraction, Object, Reconstruction, CAD, Model and Identification.
ABSTRACT
3D object recognition is a difficult and yet important problem in computer vision. It is a necessary step in many industrial
applications, such as the identification of industrial parts, the automation of the manufacturing process, and is essential for
intelligent robots equipped with powerful visual feedback systems. In this paper, a complete procedure is described to recognise
3D objects, using model-based recognition techniques. Objects in the scene are reconstructed by digital photogrammetry, while
models in the database are generated by a CAD system. A detailed comparison between the potential matching graphs of an object
and a model determines the identification of the sensed object, its position and orientation.
1. INTRODUCTION
Digital photogrammetric procedures of machine vision are
being investigated for their application in a flexible manu-
facturing system (FMS). Flexible manufacturing enables
multiple products to be fabricated on a single assembly line
under computer program control. The system is managed by
work transfer robots which are required to recognise objects, as
they pass along the assembly line, and to determine the next
appropriate action that should be taken on them. For the
recognition of objects, it is necessary to extract visible features
on multiple digital images of the object by image analysis
procedures. These features form the basis of the reconstruction
of the objects in terms of 3-dimensional geometric primitives.
This representation of the object is then compared against
entities in a model database, which contains a description of
each object the system is required to recognise.
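This sequence of operations can be viewed as a pipeline from image acquisition to interpretation. The outline below is a minimal structural sketch of such a pipeline, not the authors' implementation; all stage functions are hypothetical placeholders named for illustration only.

def detect_edges(image):
    """Extract edge pixels from one intensity image (hypothetical placeholder)."""
    return []                      # edge pixels or edge chains

def segment_lines(edge_pixels):
    """Group edge pixels into straight-line segments (hypothetical placeholder)."""
    return []                      # 2D line segments

def reconstruct_object(segments_per_image, camera_models):
    """Match 2D segments across the images and intersect rays to form
    3D geometric primitives (hypothetical placeholder for the
    photogrammetric reconstruction step)."""
    return []                      # 3D edges and faces

def match_to_models(primitives, model_database):
    """Compare the reconstructed object with each CAD-derived model and
    return the best-matching model with its pose (hypothetical placeholder)."""
    return None, None

def recognise(images, camera_models, model_database):
    # Feature extraction on every image, then reconstruction and matching.
    segments = [segment_lines(detect_edges(img)) for img in images]
    primitives = reconstruct_object(segments, camera_models)
    model, pose = match_to_models(primitives, model_database)
    return model, pose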
The development of such model-based recognition techniques
has occupied the attention of many researchers in the computer
vision community for years (Besl and Jain, 1985; Chin and
Dyer, 1986; Brady et al., 1988; Fan, 1990; Flynn and Jain,
1991). Many machine vision systems developed so far have
been mostly based on range images which contain direct 3D
properties of objects. Using range images, the ambiguities of
the feature interpretation which usually occur in an intensity
image, such as shadows, surface markings or illumination, are
eliminated. However, an intensity-based vision system is still
acceptable not only because of its relevance to biological vision
but also because of the robustness of passive sensing for
industrial and other applications. There are a number of
advantages in the use of an intensity imaging system, including:
the intensity data is viewable by an operator and can reveal
more than geometric information, e.g. colour, texture,
blemishes; features such as edges and faces can be extracted
from the object by image processing, provided that these
features are apparent in the image; lighting can be varied to
accentuate various elements in the object.
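As an illustration of the kind of low-level feature extraction mentioned above, the sketch below computes an edge map from an intensity image with a plain Sobel gradient and a fixed threshold; the operator and threshold are assumptions for illustration, not the edge detector used in this system.

import numpy as np

def sobel_edges(image, threshold=50.0):
    """Return a boolean edge map for a 2D intensity image.
    Illustrative only: a plain Sobel gradient with a fixed threshold."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    # Pad by one pixel so the output has the same shape as the input.
    padded = np.pad(image.astype(float), 1, mode="edge")
    gx = np.zeros(image.shape, dtype=float)
    gy = np.zeros(image.shape, dtype=float)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + 3, c:c + 3]
            gx[r, c] = np.sum(window * kx)
            gy[r, c] = np.sum(window * ky)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

With a typical 8-bit image the threshold would be tuned empirically to the lighting conditions.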
One problem of object recognition is related to the
representation of models in a database and objects in scenes.
The representation of models should be compatible with the
description of the sensed object, so that elements from models and objects
can be matched directly. One can match objects with models at many
different levels of description, with some trade-offs: the lower the level
of a description, the easier it is to compute. However, such a
description is not invariant to viewing directions, which makes
it difficult to find correspondence between objects and models.
The higher level descriptions, on the other hand, maintain their
invariance but the known algorithms to compute them are often
weak and error-prone (Fan, 1990). The appropriate level of
description to be used for matching thus depends on the
expected variations in the scenes and on the state of the art in
computing descriptions of models.
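To make the trade-off concrete, a higher-level, more view-invariant description can be an attributed graph whose nodes are surfaces and whose arcs record adjacency; recognition then amounts to finding a consistent node correspondence between the object graph and a model graph. The fragment below is a toy sketch of that idea with invented attributes and a brute-force search; it is not the matching-graph procedure used in this paper.

from itertools import permutations

# A description is a set of nodes (surfaces with a type attribute) plus
# adjacency relations between them. Attributes here are invented examples.
object_desc = {
    "nodes": {"f1": "planar", "f2": "planar", "f3": "cylindrical"},
    "edges": {("f1", "f2"), ("f2", "f3")},
}
model_desc = {
    "nodes": {"a": "planar", "b": "planar", "c": "cylindrical"},
    "edges": {("a", "b"), ("b", "c")},
}

def consistent(mapping, obj, mod):
    """Check that a node correspondence preserves attributes and adjacency."""
    for o, m in mapping.items():
        if obj["nodes"][o] != mod["nodes"][m]:
            return False
    for (o1, o2) in obj["edges"]:
        m1, m2 = mapping[o1], mapping[o2]
        if (m1, m2) not in mod["edges"] and (m2, m1) not in mod["edges"]:
            return False
    return True

def match(obj, mod):
    """Brute-force search for a one-to-one correspondence (small graphs only)."""
    obj_nodes = list(obj["nodes"])
    for perm in permutations(mod["nodes"], len(obj_nodes)):
        mapping = dict(zip(obj_nodes, perm))
        if consistent(mapping, obj, mod):
            return mapping
    return None

print(match(object_desc, model_desc))   # e.g. {'f1': 'a', 'f2': 'b', 'f3': 'c'}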
Figure 1: Components of an object recognition system. (The original diagram shows a CAD system feeding a geometric inference module that builds the model database; CCD cameras provide images for edge detection and line segmentation, from which 3D objects are reconstructed by matching; a matching module compares the reconstruction with the model database and outputs the scene interpretation: object identity, position and orientation.)