Adopting proper price coefficients for the cost function
is important for obtaining reliable results. As shown in
Eq. (13), the price coefficients define the strength of the
connections between neurons and the constant input of
each neuron. Conceptually, they can be regarded as
weights that are adjusted to adapt the system to
various application conditions. For example, if the factor
of shape similarity is considered much more important
than the other factors, a large value can be given to its
price coefficient. However, although the concept is plausible,
there are no rigorous rules for tuning the system to be
optimal.
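As an illustration of how the price coefficients act as connection weights, the following sketch implements a generic Hopfield-style matching network. The coefficient names (c_shape, c_orient, c_uniq), the mean-field sigmoid update, and the form of the cost terms are illustrative assumptions and are not the exact expression of Eq. (13).

```python
import numpy as np

def match_features(shape_sim, orient_cons, c_shape=1.0, c_orient=1.0,
                   c_uniq=2.0, n_iter=50, tau=0.1):
    """Hopfield-style feature matching (illustrative sketch).

    shape_sim[i, j]   : shape similarity between left feature i and right feature j
    orient_cons[i, j] : orientation consistency of the pair (i, j)
    The c_* arguments play the role of the price coefficients: they weight
    how strongly each cost term drives the network.
    """
    n_left, n_right = shape_sim.shape
    v = np.full((n_left, n_right), 0.5)          # neuron states in [0, 1]

    for _ in range(n_iter):
        # Input to each neuron: reward for similar, consistently oriented pairs,
        # penalty when a feature is about to be matched more than once.
        row_excess = v.sum(axis=1, keepdims=True) - v
        col_excess = v.sum(axis=0, keepdims=True) - v
        u = (c_shape * shape_sim
             + c_orient * orient_cons
             - c_uniq * (row_excess + col_excess))
        v = 1.0 / (1.0 + np.exp(-u / tau))       # sigmoid activation

    # Read out matches: neurons that settled near 1
    return [(i, j) for i in range(n_left) for j in range(n_right) if v[i, j] > 0.9]
```

Raising c_shape in this sketch makes the network favour pairs with high shape similarity, which mirrors the idea of enlarging the corresponding price coefficient when shape similarity is considered the dominant factor.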
The settings of the thresholds in Eqs. (8) and (9) are
also important. They can be adjusted to fit the
conditions of the images at hand. The threshold on
shape similarity allows for disturbances caused by image
distortion and noise, while the threshold on orientation
consistency allows for geometric changes due to the
different viewing angles of the cameras.
4. DETERMINING CONJUGATE POINTS
After conjugate features are matched, the approximate
relative orientation can be solved using the centroid
coordinates of the conjugate features. This allows us to
narrow down the search windows when applying
template matching to determine conjugate points.
In this application, normalized cross-correlation
(NCC) is used to determine conjugate points to one-
pixel accuracy. Sub-pixel accuracy is then reached by
least-squares matching (LSM) [Ackermann, 1984].
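The following sketch shows the pixel-level NCC search inside a search window. For the sub-pixel step it fits a parabola to the correlation peak, which is a simpler stand-in for the least-squares matching of [Ackermann, 1984]; the function names and window handling are illustrative.

```python
import numpy as np

def ncc(template, window):
    """Normalized cross-correlation of two equally sized patches."""
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    return (t * w).sum() / denom if denom > 0 else 0.0

def match_template(template, search, subpixel=True):
    """Locate the template inside the search window.

    Returns (row, col, score) for the upper-left corner of the best match.
    The sub-pixel refinement here is a parabola fit to the correlation
    surface, standing in for least-squares matching.
    """
    th, tw = template.shape
    sh, sw = search.shape
    scores = np.full((sh - th + 1, sw - tw + 1), -1.0)
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            scores[r, c] = ncc(template, search[r:r + th, c:c + tw])

    r, c = np.unravel_index(np.argmax(scores), scores.shape)
    row, col = float(r), float(c)
    if subpixel and 0 < r < scores.shape[0] - 1 and 0 < c < scores.shape[1] - 1:
        den_r = scores[r - 1, c] - 2 * scores[r, c] + scores[r + 1, c]
        den_c = scores[r, c - 1] - 2 * scores[r, c] + scores[r, c + 1]
        if den_r != 0:
            row = r + 0.5 * (scores[r - 1, c] - scores[r + 1, c]) / den_r
        if den_c != 0:
            col = c + 0.5 * (scores[r, c - 1] - scores[r, c + 1]) / den_c
    return row, col, scores[r, c]
```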
In order to obtain reliable matches, each template
should contain sufficient grey-level variation. In addition,
conjugate points evenly distributed over the overlapping
image area are required to obtain a reliable relative
orientation. These requirements can be met by using an
interest operator to locate the locally most interesting
points as template locations.
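The paper does not specify which interest operator was used; the sketch below uses the classical Moravec operator as one possible choice, together with a simple grid scheme that keeps only the locally best point per cell so that template locations are spread evenly over the image. The window and grid sizes are illustrative.

```python
import numpy as np

def moravec_interest(img, window=5):
    """Moravec interest value per pixel: the minimum, over four directions,
    of the sum of squared grey-level differences between the window and a
    shifted copy of itself."""
    half = window // 2
    h, w = img.shape
    interest = np.zeros((h, w))
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for r in range(half, h - half - 1):
        for c in range(half + 1, w - half - 1):
            patch = img[r - half:r + half + 1, c - half:c + half + 1].astype(float)
            vals = []
            for dr, dc in shifts:
                shifted = img[r - half + dr:r + half + 1 + dr,
                              c - half + dc:c + half + 1 + dc].astype(float)
                vals.append(((patch - shifted) ** 2).sum())
            interest[r, c] = min(vals)
    return interest

def locally_best_points(interest, grid=64):
    """Pick the most interesting point in each grid cell so that the
    template locations are evenly distributed over the image."""
    points = []
    h, w = interest.shape
    for r0 in range(0, h, grid):
        for c0 in range(0, w, grid):
            cell = interest[r0:r0 + grid, c0:c0 + grid]
            if cell.max() > 0:
                r, c = np.unravel_index(np.argmax(cell), cell.shape)
                points.append((r0 + r, c0 + c))
    return points
```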
5. RESULTS
Several pairs of aerial stereo photographs have been
tested on the system. Fig. 5 shows one of the test
photograph pairs. The photographs were digitized at a
resolution of 600 dpi, which corresponds to a pixel size
of 42.3 µm square.
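The pixel size follows directly from the scan resolution: 25.4 mm / 600 dots ≈ 42.3 µm per pixel.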
Using the region-growing technique, homogeneous
areas are segmented from each image. There are 23
features derived from the left image and 34 features
derived from the right image (Fig. 6).
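As an illustration of the segmentation step, the following sketch grows a homogeneous region from a seed pixel; the homogeneity test (grey value within a tolerance of the running region mean) and the parameter values are illustrative assumptions, since the paper does not give the exact criterion.

```python
import numpy as np
from collections import deque

def grow_region(img, seed, tol=10):
    """Grow a homogeneous region from a seed pixel: 4-neighbours are added
    as long as their grey value stays within `tol` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    total, count = float(img[seed]), 1

    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(img[nr, nc]) - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(img[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask
```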
Conjugate features are determined using the Fourier
descriptor matching technique described in
Sections 2 and 3. After 5 iterations of the neural network
computation, the final state was reached and 10 pairs
of conjugate features were found. The matched
pairs are 4-4, 6-9, 8-11, 9-12, 11-15, 12-17, 19-22, 20-
23, 21-24, and 22-28. By checking the images visually,
we can verify that they are all correct matches.
Using the interest operator and template matching
techniques, 150 conjugate points were found. After
the computation of the relative orientation, 16 conjugate
pairs were eliminated because their y-parallax residuals
were more than three times the mean square error.
Finally, a root-mean-square error in y-parallax of
20 µm was obtained. The overall computation time was
about 30 minutes when executed on a Pentium 90
computer.
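The rejection test used above can be sketched as follows; in practice it would be interleaved with re-solving the relative orientation, and the interpretation of the mean square error as the root-mean-square residual is an assumption.

```python
import numpy as np

def reject_y_parallax_outliers(residuals, factor=3.0):
    """Flag conjugate points whose y-parallax residual exceeds
    `factor` times the root-mean-square error of all residuals."""
    residuals = np.asarray(residuals, dtype=float)
    rms = np.sqrt(np.mean(residuals ** 2))
    keep = np.abs(residuals) <= factor * rms
    return keep, rms
```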
Figure 5: An example of a test image pair