provides the THINEDGE board with digitized grey level pictures. By means of two 8 × 8 convolution kernels and a lookup table the THINEDGE board computes the magnitude and local direction of the gradient. The resulting contours are passed through a rule-based thinning algorithm and output to the VECTOR processor, which extracts a list of vertices for each contour. The vertices may be regarded as start and end points of vectors, which make up a polygon approximating the contour. The vertices of each polygon and the links between contours are dumped to a FIFO buffer, which can be accessed from VMEbus.

[Fig. 1 block diagram: IPP → THINEDGE (contour finder) → VECTOR (vectorising), connected via VMEbus]
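The gradient stage can be sketched in software. The following is a minimal sketch, assuming small 3 × 3 Sobel kernels in place of the board's 8 × 8 kernels, and computing the direction arithmetically where the hardware uses a lookup table; the function name is illustrative, not from the paper:

```python
import numpy as np

def gradient_magnitude_direction(img):
    """Convolve with two kernels (here 3x3 Sobel; the THINEDGE board
    uses 8x8 kernels) and return per-pixel gradient magnitude and
    local direction."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(patch * kx)
            gy[y, x] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)  # done via lookup table in hardware
    return mag, direction
```

Contour pixels would then be selected by thresholding the magnitude, with the direction image steering the subsequent thinning.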
Fig. 1. The hardware structure of the vision system.

a. digitized grey level image  b. contour image  c. polygon vertices
Fig. 2. Extracting and locating the 2-D outline of an object.

Fig. 2 illustrates an example for the computation of an object's outline from a digitized grey level picture. The contour image shown in Fig. 2b directly results from the VECTEX hardware. Further processing relies on software tools.
First, as the picture is part of an image sequence, we are able to define a region of interest in which we expect to find the object contours. Secondly, noise is reduced by merging nearby polygons and eliminating short contour segments. In a third step, edges are located with subpixel accuracy using polynomial interpolation in the gradient image. Finally, the coordinates of the polygon vertices in the image plane are computed as the intersections of adjacent polygon segments.
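The last two steps can be sketched as follows. This is a minimal sketch assuming a parabolic (second-order polynomial) fit through three gradient-magnitude samples taken across an edge, and vertices computed as intersections of the supporting lines of adjacent polygon segments; both function names are illustrative, not from the paper:

```python
def subpixel_peak(g_minus, g_center, g_plus):
    """Fit a parabola through three gradient-magnitude samples across
    an edge; return the sub-pixel offset of the maximum relative to
    the centre sample (offset lies in [-0.5, 0.5] near a true peak)."""
    denom = g_minus - 2.0 * g_center + g_plus
    if denom == 0.0:
        return 0.0
    return 0.5 * (g_minus - g_plus) / denom

def line_intersection(p1, p2, p3, p4):
    """Vertex as the intersection of two adjacent polygon segments,
    each given by two points and extended to a full line; returns
    None for parallel lines."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        return None
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)
```

Intersecting the supporting lines rather than the segments themselves keeps a vertex estimate stable even when the thinned contour rounds off the physical corner.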
Fig. 3. Feature tracking in the image plane.

Because the estimation of the 3-D structure of the object contours is based on an extended stereo approach, we need to establish correspondences between polygon vertices in subsequent images. As shown in Fig. 3, the algorithm tracks the polygon vertices in the image plane in order to identify those image points that correspond to the same object point in each of the images of the sequence. The tracking is based on a robust 2-D polygon matching algorithm, which is invariant under rotation and translation in the image plane (Otterbach et al., 1994b). The translation and rotation parameters shown in the upper left of Fig. 3 indicate the 2-D transformation of the object in the image plane as computed by the tracking algorithm.
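Once vertex correspondences are available, a 2-D translation-plus-rotation of the kind reported by the tracker can be recovered by a least-squares fit. The following is a sketch assuming matched vertex lists and a closed-form 2-D rigid alignment; it is not the published matching algorithm of Otterbach et al.:

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Least-squares 2-D rotation + translation mapping src vertices
    onto dst vertices (matched points from two frames). Returns the
    rotation angle in radians and the translation vector."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # cross-covariance of the centred point sets
    H = (src - cs).T @ (dst - cd)
    # closed-form optimal 2-D rotation angle
    angle = np.arctan2(H[0, 1] - H[1, 0], H[0, 0] + H[1, 1])
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    t = cd - R @ cs
    return angle, t
```

Averaging over all matched vertices makes the estimate robust to the sub-pixel noise of individual vertex positions.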
IAPRS, Vol. 30, Part 5W1, ISPRS Intercommission Workshop “From Pixels to Sequences”, Zurich, March 22-24, 1995