Figure 7: The line extraction from the first image

… (ZHENG and HAHN, 1990). The weight function $p(r_i)$ can be, for instance, the Cauchy function defined by

$$p(r_i) = \frac{1}{1 + \left(r_i / c\right)^2},$$

where $c$ is the standard deviation of the fit and can also be estimated.
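To illustrate how such a weight function can be used in practice, the following sketch shows an iteratively reweighted least-squares line fit with Cauchy weights, where the scale $c$ is re-estimated from the weighted residuals in each iteration. The function names and the simple parameterization $y = a + b\,x$ are assumptions for this example, not the parameterization of model I.

```python
import numpy as np

def cauchy_weights(r, c):
    """Cauchy weight function: p(r_i) = 1 / (1 + (r_i / c)^2)."""
    return 1.0 / (1.0 + (r / c) ** 2)

def irls_line_fit(x, y, n_iter=10):
    """Iteratively reweighted least-squares fit of y = a + b*x with Cauchy weights.
    The scale c is re-estimated in every iteration from the weighted residuals."""
    A = np.column_stack([np.ones_like(x), x])     # design matrix for a + b*x
    w = np.ones_like(y, dtype=float)              # start with unit weights
    for _ in range(n_iter):
        W = np.diag(w)
        params = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)       # weighted normal equations
        r = y - A @ params                                       # residuals
        c = np.sqrt((w * r ** 2).sum() / (len(y) - A.shape[1]))  # standard deviation of the fit
        w = cauchy_weights(r, c)                                 # update weights
    return params, r, w
```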
For the inference and reasoning process at a higher abstraction level during feature extraction, it is desirable to know something about the quality of the features extracted by the low-level processing. Many feature extraction algorithms, however, lack a detailed and comprehensive description of the extracted features. In fact, after parameter estimation, it is also possible to estimate the a posteriori accuracy of the estimation.
As a measure for the global goodness of fit of the data to the model, the estimate

$$\hat{\sigma}_0^2 = \frac{\sum_i w_i r_i^2}{n - u} \qquad (16)$$

can be used, where $w_i$ is the weight and $u$ is the number of unknown parameters. In addition, the a posteriori accuracies of the estimated parameters can also be determined; they are given by the standard deviations of the individual parameters.
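As a complement, here is a minimal sketch of how the estimate (16) and the parameter accuracies could be computed from the quantities of such a weighted fit (design matrix A, weights w, residuals r, as in the sketch above); the function name is assumed and this is not the actual parameter set of model I.

```python
import numpy as np

def posterior_accuracy(A, w, r):
    """A posteriori variance factor (16) and parameter standard deviations
    from design matrix A, weights w and residuals r of a weighted fit."""
    n, u = A.shape                                   # observations, unknowns
    sigma0_sq = (w * r ** 2).sum() / (n - u)         # estimate (16)
    N_inv = np.linalg.inv(A.T @ np.diag(w) @ A)      # inverse normal matrix
    sigma_params = np.sqrt(sigma0_sq * np.diag(N_inv))
    return sigma0_sq, sigma_params
```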
Figure 8: The line extraction from the second image
Now, the line support region illustrated in Figure 5 is used to estimate the parameters in model I. The results can be stored in a form of knowledge representation known as a frame (MINSKY, 1975) (cf. Table 1). This means that the accuracy of the line position is about 0.4 pixel (subpixel accuracy) and the accuracy of the line orientation is about 0.002 radian. The line estimated in this way is shown as the white line in Figure 5a.
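A frame in the sense of MINSKY (1975) can be thought of as a record with named slots. The sketch below is only an assumed illustration of such a representation for an extracted line; the slot names are invented, and only the 0.4 pixel and 0.002 radian accuracies are taken from the text.

```python
from dataclasses import dataclass

@dataclass
class LineFrame:
    """Frame-like record for an extracted line and its quality measures."""
    position: float           # estimated line position (pixels)
    orientation: float        # estimated line orientation (radians)
    sigma_position: float     # a posteriori accuracy of the position
    sigma_orientation: float  # a posteriori accuracy of the orientation
    sigma0_sq: float          # global variance factor, estimate (16)

# placeholder instance: position, orientation and sigma0_sq are arbitrary;
# the two accuracies correspond to the values quoted in the text
line = LineFrame(position=0.0, orientation=0.0,
                 sigma_position=0.4, sigma_orientation=0.002,
                 sigma0_sq=1.0)
```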
6 Experimental Results
The algorithm described in the previous sections was applied to two aerial images. Due to the limited space of this paper, many interesting intermediate results cannot be discussed in this section. Here we just illustrate the lines found by the algorithm.
The first image (cf. Figure 7a) shows an aerial scene with a house and a fence on rolling terrain. Due to shadows and poor contrast, the roof borders are fragmented. The image was used to train the net illustrated in Figure 6. After training, the net can automatically group image pixels into line support regions