The result of matching is a list of points which are
consistent with the object model and quality measures of
the fit between data and model. A final quality check ensures that an
adequate model has been used: a bad fit of the measured points to the
object model may indicate that a wrong object model was chosen. In this
case, another model should be selected. Both generation and
evaluation of correspondence hypotheses are the main
topics of research work in our concept.
3.2 Feature Extraction
Many feature based matching algorithms for photogrammetric surface
reconstruction use the Förstner operator to extract distinct points from
digital images
(Krzystek, 1995). On a symbolic level, the image is then
described by an unstructured cluster of such point
features. Evidently, a considerable amount of information
contained in the images is thrown away. We think that
this information, e.g. line information, but also
information about the mutual relations between the
extracted features, should be used in order to increase
the reliability of a matching algorithm.
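The Förstner operator is not reproduced here in detail; the following sketch only illustrates its standard form, in which two interest measures (a precision measure w and a roundness measure q) are derived from the local structure tensor of the grey levels. Function names, window size and thresholds are illustrative assumptions, not part of the operator's published definition.

import numpy as np
from scipy.ndimage import sobel, uniform_filter

def foerstner_candidates(img, window=5, q_min=0.75):
    """Illustrative sketch of the Foerstner interest operator."""
    img = img.astype(float)
    gx = sobel(img, axis=1)                 # grey-level gradients
    gy = sobel(img, axis=0)

    # Elements of the local structure tensor, averaged over a window
    gxx = uniform_filter(gx * gx, window)
    gyy = uniform_filter(gy * gy, window)
    gxy = uniform_filter(gx * gy, window)

    trace = gxx + gyy
    det = gxx * gyy - gxy * gxy
    eps = 1e-12

    w = det / (trace + eps)                 # precision of a point estimate
    q = 4.0 * det / (trace * trace + eps)   # roundness (isotropy) of the texture

    # Candidate point pixels: both measures above (heuristic) thresholds
    return (w > 0.5 * w.mean()) & (q > q_min)

Local maxima of w within the candidate mask would then give the distinct points used for matching.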
We thus want to use the more complex image model
proposed in (Fuchs et al., 1995), which was originally
designed for automatic building extraction. In this model,
the ideal image is assumed to be composed of
homogeneous segments, piecewise smooth boundary
lines of these segments and points. The digital image is a
blurred and sampled version of the ideal image which is
additionally afflicted by noise. Thus one can no longer
speak of finding lines and points in the image, but more
reasonably of regions containing line segments or points
(figure 4a); (Fuchs et al., 1995).
Figure 4: Image regions (a) and region adjacency
graph with direct (full lines) and indirect (dotted lines)
neighbourhood relations (b); adapted from
(Fuchs et al., 1995)
Each pixel can be classified as belonging either to a
homogeneous region S, to a region P containing a point or
to a region L containing a line (figure 4a) using a
measure for homogeneity and a measure for isotropy of
texture, both of which can be derived from a local function
of the grey levels. From a segmentation of the classified
image, all regions are extracted and a region adjacency
graph is created which describes the topological relations
between neighbouring regions (figure 4b); (Fuchs et al.,
1995).
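The exact homogeneity and isotropy measures are not specified here; one common choice is to take the strength (trace) of the local structure tensor as an inverse homogeneity measure and its roundness as an isotropy measure. The following sketch classifies pixels on that assumption; all thresholds are illustrative.

import numpy as np
from scipy.ndimage import sobel, uniform_filter

def classify_pixels(img, window=5, t_iso=0.75, hom_factor=0.1):
    """Classify each pixel as homogeneous (S), point (P) or line (L) region.
    The concrete measures and thresholds are illustrative assumptions."""
    img = img.astype(float)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    gxx = uniform_filter(gx * gx, window)
    gyy = uniform_filter(gy * gy, window)
    gxy = uniform_filter(gx * gy, window)

    trace = gxx + gyy                                  # local texture strength
    roundness = 4.0 * (gxx * gyy - gxy * gxy) / (trace * trace + 1e-12)

    labels = np.full(img.shape, 'S', dtype='<U1')      # weak texture -> homogeneous
    textured = trace > hom_factor * trace.mean()
    labels[textured & (roundness >= t_iso)] = 'P'      # isotropic texture -> point region
    labels[textured & (roundness < t_iso)] = 'L'       # oriented texture -> line region
    return labels

Connected components of equally labelled pixels (e.g. obtained with scipy.ndimage.label) then yield the regions, and adjacent components define the edges of the region adjacency graph.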
At the same time as the region adjacency graph is
created, attributes can be assigned to the extracted
regions such as the subpixel position of points, the
average grey level of homogeneous regions, curve
parameters (e.g. spline coefficients) for lines, etc. These
attributes will become very important for the creation of
correspondence hypotheses. As the subpixel estimates
for point coordinates (and, perhaps, in a future step,
curve parameters of lines) will be essential for the
description of the object, we refer to the region adjacency
graph as ‘feature adjacency graph’ although it also
contains homogeneous regions.
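To make the role of these attributes concrete, a feature adjacency graph might be represented roughly as follows; the class and attribute names are our own illustrative choices.

from dataclasses import dataclass, field
from typing import Optional, Set, List, Tuple

@dataclass
class Region:
    ident: int
    kind: str                                       # 'S', 'P' or 'L'
    neighbours: Set[int] = field(default_factory=set)
    # Attributes later used for correspondence hypotheses:
    point_xy: Optional[Tuple[float, float]] = None  # subpixel point position (P regions)
    mean_grey: Optional[float] = None               # average grey level (S regions)
    spline: Optional[List[float]] = None            # curve parameters, e.g. spline coefficients (L regions)

class FeatureAdjacencyGraph:
    def __init__(self):
        self.regions = {}

    def add_region(self, region):
        self.regions[region.ident] = region

    def add_adjacency(self, a, b):
        # direct neighbourhood relation between two regions
        self.regions[a].neighbours.add(b)
        self.regions[b].neighbours.add(a)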
3.3 Image Geometry
Many matching algorithms use epipolar images, e.g.
(Gülch, 1994); (Krzystek, 1995). If epipolar images are
used, the matching problem can be reduced to a one-
dimensional search, which considerably reduces the
complexity of the matching algorithm. However, in our
concept we do not use epipolar images, mainly for three
reasons:
• By using epipolar images, one is restricted to using
stereo image pairs for matching. We eventually want
to use more than two images for that purpose.
• Small errors in the orientation parameters of the
images might deteriorate the matching result,
especially when lines which are almost parallel to the
epipolar lines are used.
• Epipolar images are derived from the original images
by resampling. Feature extraction may be influenced
by the lowpass characteristics of resampling methods.
Instead of using epipolar images, we will rely on bundle
block geometry. The bundle block adjustment system
ORIENT (Kager, 1989) will be integrated into the
matching software to be developed. Bundle block
geometry will be used to establish geometrical constraints
as well as for the formulation of models for the local
object surface.
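ORIENT itself works with collinearity equations and a rigorous bundle block adjustment; as a simplified stand-in, the sketch below uses 3x4 projection matrices to intersect a hypothesised tuple of image points and returns the reprojection residuals that can later be tested against a threshold (section 3.4). The linear triangulation shown here is a textbook substitute, not the procedure used in ORIENT.

import numpy as np

def intersect_and_residuals(P_list, xy_list):
    """Forward intersection of hypothesised corresponding image points.
    P_list  : 3x4 projection matrices (one per image) -- simplified stand-in
              for the bundle block geometry handled by ORIENT.
    xy_list : (x, y) image coordinates of the hypothesised matches."""
    # Linear (DLT) triangulation: two equations per image
    A = []
    for P, (x, y) in zip(P_list, xy_list):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1] / Vt[-1][3]                      # homogeneous object point

    # Image residuals of the reprojected object point
    residuals = []
    for P, (x, y) in zip(P_list, xy_list):
        u, v, w = P @ X
        residuals.append(float(np.hypot(u / w - x, v / w - y)))
    return X[:3], residuals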
3.4 Generation and Evaluation of Correspondence
Hypotheses
The generation of hypotheses for the correspondence of
features from different images is based on some measure
of similarity between these features. This similarity
measure is based on the comparison of the feature
attributes which have been extracted. If the viewing
directions are nearly parallel, the correlation coefficient of
the grey levels in a small region surrounding the point can
additionally be used as a similarity measure. We also use
the feature adjacency graph for that purpose because we
assume that a correspondence between features from
different images is more likely if the neighbouring image
regions also show similar attributes (Zhang et al., 1992).
We have not yet decided which feature
attributes will be used and how the similarity measure
shall be composed. These questions are among the main
topics of our research.
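Since the composition of the similarity measure is still an open question, the following sketch only shows one conceivable combination of an attribute comparison with the grey-level correlation coefficient mentioned above; the choice of attributes and the weights are assumptions.

import numpy as np

def similarity(grey_a, grey_b, patch_a=None, patch_b=None, w_attr=0.5, w_ncc=0.5):
    """One conceivable similarity measure for a correspondence hypothesis."""
    # Attribute part: compare, for example, the mean grey levels of the features
    attr_sim = 1.0 / (1.0 + abs(grey_a - grey_b))

    # Grey-level part: correlation coefficient of small patches around the
    # points (only meaningful for nearly parallel viewing directions)
    ncc = 0.0
    if patch_a is not None and patch_b is not None:
        a = (patch_a - patch_a.mean()) / (patch_a.std() + 1e-12)
        b = (patch_b - patch_b.mean()) / (patch_b.std() + 1e-12)
        ncc = float((a * b).mean())

    return w_attr * attr_sim + w_ncc * ncc

A neighbourhood term derived from the feature adjacency graph, comparing the attributes of adjacent regions, could enter the same sum in an analogous way.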
Using similarity alone as a criterion for the generation of
hypotheses would yield too many wrong hypotheses. The
number of initial hypotheses is therefore reduced in two
ways (see the sketch after this list):
• Introduction of geometrical constraints: Only features
with image residuals smaller than a certain threshold
may correspond to the same object point.
• Reduction of search space by approximate values.
The algorithm is successively applied to relatively
small homologous image patches which are extracted
according to the approximate values. Additionally,
thresholds for local height differences can be
introduced.
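As an illustration of how these two reductions could be combined, the sketch below prunes a set of candidate correspondences; it re-uses the intersect_and_residuals() function sketched in section 3.3, and the thresholds as well as the way the approximate height enters are again assumptions.

def filter_hypotheses(hypotheses, P_list, z_approx, max_residual=1.0, max_dz=20.0):
    """Prune correspondence hypotheses with the two reductions listed above.
    hypotheses : list of tuples of (x, y) image points, one per image."""
    kept = []
    for xy_list in hypotheses:
        X, residuals = intersect_and_residuals(P_list, xy_list)
        if max(residuals) > max_residual:      # geometrical constraint on image residuals
            continue
        if abs(X[2] - z_approx) > max_dz:      # threshold on local height difference
            continue
        kept.append((xy_list, X))
    return kept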