Distance as the "Cost" in Euclidean Space for similarity assessment. Because the distance behaves as a sine function of the angle between the feature vectors when the distance is small, it offers a sensitive measure for distinguishing close similarity in the primitive feature space. In the Least Squares Matching Method, although the approach to matching is totally different, the basic idea is similar (min. Σvv <--> min. distance), namely matching with very high, subpixel accuracy; it requires, however, a very good conjugacy prediction (coarse DEM). The commonly used Normalized Cross Correlation method provides a correlation function that behaves as a cosine function [Lo,1991]. This function has a flat peak at maximum similarity; therefore, interpolating this function to obtain subpixel accuracy is hardly worth the effort.
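To illustrate the point numerically (a minimal sketch of our own, not taken from [Lo,1991]; the 1-D test signal and the shift values are arbitrary), the following Python fragment compares the two similarity measures for small misalignments: the Normalized Cross Correlation stays close to its flat peak, while the Euclidean distance changes noticeably.

```python
import numpy as np

def euclidean_distance(f, g):
    """Distance between two intensity/feature vectors (sensitive near a match)."""
    return float(np.linalg.norm(f - g))

def normalized_cross_correlation(f, g):
    """NCC: cosine of the angle between the mean-reduced vectors (flat near its peak)."""
    f0, g0 = f - f.mean(), g - g.mean()
    return float(np.dot(f0, g0) / (np.linalg.norm(f0) * np.linalg.norm(g0)))

# Compare how both measures react to small misalignments of a 1-D signal.
x = np.linspace(0.0, 4.0 * np.pi, 200)
template = np.sin(x)
for shift in (0.00, 0.05, 0.10):
    candidate = np.sin(x + shift)
    print(f"shift={shift:.2f}  "
          f"NCC={normalized_cross_correlation(template, candidate):.6f}  "   # stays near 1
          f"distance={euclidean_distance(template, candidate):.6f}")         # grows steadily
```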
c) When we use the unknown parameters to establish a mathematical model (observation equations) which can precisely describe the phenomenon of observation/sampling (e.g. the back mapping of intensity from image space to object space), the Least Squares Method minimizes the differences between the descriptive model and the actual observations in order to determine the values of the unknowns (when the observation equations are linear), or to update the estimated values of the unknowns iteratively (when the observation equations are non-linear). Every pixel involved in matching provides one observation equation; if redundant observations exist (the number of observation equations exceeds the number of unknown parameters, i.e. the over-determined problem), the Least Squares Method is a good method for determining the values of the unknown parameters even when the observation errors do not follow a Normal Distribution. The principle of the Maximum Likelihood Method is identical to the principle of Least Squares Adjustment if we assume that the observation errors follow a Normal Distribution. But unlike the Maximum Likelihood Method of estimation, the method of Least Squares does not require knowledge of the distribution from which the observations are drawn for the purpose of parameter estimation; for the testing of hypotheses, however, we would require knowledge of the distribution [Bouloucos,1989].
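As a minimal sketch of the over-determined linear case (the synthetic numbers and variable names are ours, purely for illustration), each row of A below stands for one observation equation, and the redundancy yields an estimate of the variance factor:

```python
import numpy as np

# Hypothetical linear model l = A x + v: 50 observation equations, 3 unknowns.
rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0, 0.5])                # the "unknown" parameters
A = rng.normal(size=(50, 3))                       # design matrix (one row per observation)
l = A @ x_true + rng.normal(scale=0.01, size=50)   # observations with small random errors

# Least squares estimate: minimize v^T v = (l - A x)^T (l - A x).
x_hat, _, _, _ = np.linalg.lstsq(A, l, rcond=None)
v = l - A @ x_hat                                  # residuals
sigma0_sq = (v @ v) / (A.shape[0] - A.shape[1])    # a-posteriori variance factor
print(x_hat, sigma0_sq)
```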
d) Least Squares Window Matching has been used in the two-step approach, whereby matching in image space is performed first to obtain the conjugate positions, and Space Intersection is then used to reconstruct the object surface [Ackermann,1984]. Its window size is limited to about 20x20 or 30x30 pixels; if the window is smaller than this, reliability decreases, while if it is larger, accuracy becomes poor, because matching is performed in image space and the geometric model is simplified [Rosenholm,1987a]. The improved approach unifies these two steps into one and performs the matching in object space. By referring to the coarse object surface Z(X,Y), the image intensity is back-mapped into object space to obtain the object reflectance D(X,Y) (image inversion); matching is then performed on the object surface and the coarse object surface is refined iteratively. In addition, the two functions in object space, i.e. the object surface Z(X,Y) and the object reflectance D(X,Y), are determined (considered) simultaneously in one solution with Least Squares Adjustment; this is the reason why we consider this a more rigorous method than the earlier ones.
e) By referring to the coarse object surface Z(X,Y), there are two ways of back-mapping the image intensity onto the object surface in order to obtain the estimated object reflectance D(X,Y). The first is called Direct Pixel Transformation: it starts from a pixel position (x,y) in the image; the Collinearity Equation with the orientation parameters of the scanner is used to intersect the coarse object surface Z(X,Y), which yields the position of the corresponding groundel (X,Y), and the intensity of the pixel is transferred to it. The problem, however, is that after back mapping the groundels are distributed over the object surface in a random pattern, and this pattern also differs between the multi-view images; another shortcoming is that the heights of these randomly positioned points have to be interpolated onto a grid DEM, which loses information.
The other way is called Indirect Pixel Transformation: we start from a coarse DEM grid point (X,Y,Z), and the Collinearity Equation with the orientation parameters of the scanner is used to obtain the corresponding pixel position (x,y). A further problem arises from SPOT's push-broom scanner: the orientation parameters of each scan line are different (for a frame camera they are the same for the whole image); therefore, if we do not know which set of orientation parameters to use for the transformation, we cannot use the Collinearity Equation to find the corresponding pixel in the image and transfer its intensity (after resampling) to that grid point. If this problem can be solved, Indirect Pixel Transformation is better than Direct Pixel Transformation, since the weighted average of the pixel intensities obtained from the multi-view images for the same grid point can be used as the estimated object reflectance D(X,Y), with the weights assigned according to the slope of the ray.
f) The Least Squares Matching method is capable of handling any number of images beyond two, e.g. a triplet, as well as images scanned in various spectral bands simultaneously. This increases both the reliability and the accuracy of the result [Shibasaki & Murai,1988].
g) The traditional method uses a window of pixels in matching to determine a single point (usually the centre point) only, whereas Object Space Least Squares Matching uses a window of pixels to determine multiple points of a grid-pattern DEM in one solution. If preprocessing provides prior knowledge about the quality of the matching windows (e.g. the gradient of the intensity), we can assign different weights accordingly in the Least Squares Adjustment (e.g. give high-contrast pixels a larger weight). This means that we require the high-contrast pixels to make a larger contribution to the decision making, which helps to avoid wrong decisions in the homogeneous parts of the image. Thus a combination of the advantages of Feature Based Matching and Area Based Matching can be obtained. DEM determination executed in this way is therefore called Multi-Point Matching, and offers higher reliability [Rosenholm,1987b].
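A short sketch (our own formulation; the normalization of the weights and the diagonal weight matrix are illustrative choices) of how gradient-derived weights could enter the weighted Least Squares solution:

```python
import numpy as np

def gradient_weights(window, eps=1e-6):
    """Weight each pixel of a matching window by its intensity gradient magnitude,
    so high-contrast pixels contribute more to the adjustment."""
    grad_row, grad_col = np.gradient(window.astype(float))
    magnitude = np.hypot(grad_row, grad_col)
    return magnitude / (magnitude.max() + eps)        # normalised to [0, 1]

def weighted_least_squares(A, l, weights):
    """Solve the normal equations A^T P A x = A^T P l with a diagonal weight
    matrix P, e.g. weights = gradient_weights(window).ravel()."""
    P = np.diag(weights)
    return np.linalg.solve(A.T @ P @ A, A.T @ P @ l)
```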
h) Robust Estimation techniques can be applied in the Least Squares Adjustment to eliminate noise that appears as gross errors in the sampling (observations).
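One possible realization (our assumption; the text does not prescribe a particular scheme) is iterative reweighting with a Huber-type weight function and a MAD scale estimate, which gradually down-weights observations with large residuals:

```python
import numpy as np

def robust_least_squares(A, l, k=1.5, n_iterations=10):
    """Iteratively reweighted least squares with a Huber-type weight function."""
    x = np.linalg.lstsq(A, l, rcond=None)[0]                 # ordinary LS start
    for _ in range(n_iterations):
        v = l - A @ x                                        # residuals
        scale = 1.4826 * np.median(np.abs(v)) + 1e-12        # robust scale (MAD)
        w = np.clip(k * scale / (np.abs(v) + 1e-12), None, 1.0)  # down-weight outliers
        P = np.diag(w)
        x = np.linalg.solve(A.T @ P @ A, A.T @ P @ l)
    return x
```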
i) The Least Squares Method provides a theoretical quality estimate of the matching result based on statistical theory. It also offers useful information for removing gross errors from the DTM data in the postprocessing stage as quality control, as well as for using the DTM data in a GIS [Day & Muller,1988].
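As a sketch of the quality information that falls out of the adjustment (our formulation, assuming unit weights for brevity), the a-posteriori variance factor and the standard deviations of the unknowns follow directly from the residuals and the cofactor matrix:

```python
import numpy as np

def quality_estimates(A, l, x_hat):
    """A-posteriori variance factor and standard deviations of the estimated
    unknowns for an adjustment with design matrix A (unit weights assumed)."""
    v = l - A @ x_hat                                   # residuals
    redundancy = A.shape[0] - A.shape[1]
    sigma0_sq = (v @ v) / redundancy                    # a-posteriori variance factor
    Qxx = np.linalg.inv(A.T @ A)                        # cofactor matrix of the unknowns
    return sigma0_sq, np.sqrt(sigma0_sq * np.diag(Qxx))
```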
j) The grid DEM is generated separately, patch by patch, together with its quality estimate. The aggregation of the whole DEM can then be done, e.g. by means of the Finite Element Method, with a quality improvement of the DEM in this last stage [Xiao et al.,1988].
4. SUMMARY
4.1 Summary of the Matching Algorithms and the Selection of Approach
There are many matching algorithms that can be used; we summarize the relevant algorithms below and indicate those selected and applied in this system with the mark ***.
(a) Information for Matching:
*** Intensity-Based
*** Feature-Based (Property-Based)
(b) Criterion for Similarity Assessment:
  *   Angle between Matching Vectors:
      COS function ---> Less Accuracy / High Reliability
  *** Distance between Matching Vectors:
      SIN function (for small distances) ---> High Accuracy / Low Reliability
(c) ...