present in Figure 4 have now been removed. There are
no visible patterns in the directions of the residual vectors. The RMSE is 61 metres for the control points and 73 metres for the check points. These values compare with the corresponding values from Figure 4 of 156 m and 160 m for the control and check points respectively.
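The RMSE figures quoted here can be computed directly from the planimetric residual vectors. A minimal Python sketch follows, assuming the usual convention that each residual length combines the easting and northing differences; the residual arrays are illustrative stand-ins, not the paper's data.

import numpy as np

def rmse(residuals):
    """Root-mean-square error of 2-D residual vectors.

    residuals: (n, 2) array of (dE, dN) differences in metres between
    the geo-referenced and the true coordinates of each point.
    """
    return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))

# Illustrative residuals for 4 control and 7 check points (metres)
rng = np.random.default_rng(0)
control_res = rng.normal(0.0, 45.0, size=(4, 2))
check_res = rng.normal(0.0, 50.0, size=(7, 2))

print(f"control point RMSE: {rmse(control_res):5.1f} m")
print(f"check point RMSE:   {rmse(check_res):5.1f} m")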
[Figure 6 plot: residual vectors over the study area; x-axis Eastings (m), 200 000 to 340 000; y-axis Northings (m), 890 000 to 980 000]
Figure 6 Residuals (hybrid network); residual vectors at scale 1:50
5.1 Learning versus Recall for Hybrid Networks
Figure 7 presents three curves illustrating how the geo-referencing error decreases with the amount of training. The first curve shows the progress of network learning using the 4 control points for the neural network training. This curve starts with the largest error; however, after 20 000 iterations the error has reduced to a value similar to that of the curve displaying the residual RMSE for the 7 check points.
[Figure 7 plot: training curve (4 control points) and recall curve (7 check points), with a third curve for all points; y-axis geo-referencing error (m), 0 to 1800; x-axis Number of Iterations, 0 to 50 000]
Figure 7 Learning and Recall Curves for Hybrid Network
The check point curve indicates the network's ability to recall the geo-referencing function should the training process be halted; it represents the hybrid network's ability to approximate the geo-referencing function up to 50 000 training iterations. This curve closely resembles the training curve. Initially, the recall curve produces a smaller error than the training curve based on the control points. This feature is, however, only present over a limited domain (up to 20 000 iterations); once the curves begin to flatten off, they both stabilise to the same geo-referencing error of approximately 60 m.
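Curves of this kind are generated by evaluating the same error measure on both point sets at fixed intervals during training. The sketch below illustrates the idea with a toy affine model and synthetic correspondences standing in for the paper's hybrid network and GCP data; make_points and the transform coefficients are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic image->map correspondences (stand-ins for the paper's GCPs)
def make_points(n):
    xy = rng.uniform(-1.0, 1.0, size=(n, 2))        # normalised image coords
    true_A = np.array([[60_000.0, 5_000.0], [-4_000.0, 45_000.0]])
    true_t = np.array([270_000.0, 935_000.0])
    EN = xy @ true_A + true_t + rng.normal(0.0, 40.0, size=(n, 2))
    return xy, EN

xy_ctl, EN_ctl = make_points(4)   # control points: drive the training
xy_chk, EN_chk = make_points(7)   # check points: measure recall only

# Normalise targets so plain gradient descent is stable
mu, sigma = EN_ctl.mean(axis=0), EN_ctl.std(axis=0)

A = np.zeros((2, 2)); t = np.zeros(2)   # toy affine "network"

def rmse(xy, EN):
    pred = (xy @ A + t) * sigma + mu    # predictions back in metres
    return float(np.sqrt(np.mean(np.sum((pred - EN) ** 2, axis=1))))

for it in range(1, 50_001):
    err = (xy_ctl @ A + t) - (EN_ctl - mu) / sigma  # residual, scaled units
    A -= 0.01 * xy_ctl.T @ err / len(xy_ctl)        # gradient step on A
    t -= 0.01 * err.mean(axis=0)                    # gradient step on t
    if it % 5_000 == 0:                             # sample both curves
        print(f"{it:6d}  training {rmse(xy_ctl, EN_ctl):7.1f} m"
              f"  recall {rmse(xy_chk, EN_chk):7.1f} m")

Printing both RMSE values at each sampled iteration yields a training curve and a recall curve directly comparable to those in Figure 7.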
The rule base estimate was produced using GCP 1 (as highlighted in Figure 5). The full set of eleven GCPs was used in the training process for both network architectures, hence no check point data were available. The results are presented in Table 2, which compares the two architectures' ability to learn the function, not to recall it.
GCP   Neural Network (m)      Hybrid Network (m)
No.     dE    dN    dL          dE    dN    dL
 1    -140  -203   247          87    15    88
 2     -74  -102   126         -20   -39    44
 3    -208  -201   289          62    50    80
 4     186    68   198          18   -72    74
 5    -117   -25   120         -70     6    70
 6      56    46    72          39    23    45
 7      60    24    65           7   -47    47
 8      83    12    84          46    -3    46
 9       7   -36    37          61    12    62
10     -34   -57    67          -8   -18    19
11      50    56    75          -8     2     9
mean   -12   -38   125          19    -6    53

Table 2 Comparing the Stand-Alone Neural Network with the Hybrid Neural Network in Learning
Although the mean dE values are of similar magnitude for the two types of network, inspection of the table reveals a much reduced variance in the individual dE values for the hybrid network compared with the stand-alone network. Furthermore, the mean dN value is far smaller for the hybrid network than for its stand-alone counterpart (-6 m compared to -38 m). The same applies to the mean residual dL, which amounts to 53 m for the hybrid network against 125 m for the stand-alone neural network.
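These summary figures can be re-derived from the per-GCP residuals, since each residual length is dL = sqrt(dE^2 + dN^2). The short sketch below takes only the (dE, dN) pairs from Table 2, reproduces the mean row (to rounding), and makes the variance comparison explicit by also printing the sample standard deviation of dE.

import numpy as np

# (dE, dN) per GCP, in metres, copied from Table 2
nn  = np.array([[-140, -203], [-74, -102], [-208, -201], [186, 68],
                [-117, -25], [56, 46], [60, 24], [83, 12],
                [7, -36], [-34, -57], [50, 56]], dtype=float)
hyb = np.array([[87, 15], [-20, -39], [62, 50], [18, -72],
                [-70, 6], [39, 23], [7, -47], [46, -3],
                [61, 12], [-8, -18], [-8, 2]], dtype=float)

for name, r in (("neural network", nn), ("hybrid network", hyb)):
    dL = np.hypot(r[:, 0], r[:, 1])        # residual length per GCP
    print(f"{name}: mean dE {r[:, 0].mean():6.1f} m, "
          f"mean dN {r[:, 1].mean():6.1f} m, "
          f"mean dL {dL.mean():6.1f} m, "
          f"std dE {r[:, 0].std(ddof=1):6.1f} m")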
[Figure 8 plot: learning curves for the stand-alone and hybrid networks; y-axis geo-referencing error (m), 0 to 500; x-axis Number of Iterations, 0 to 50 000]
Figure 8 Learning Curves for both Stand-Alone and Hybrid Neural Network Models
The enhanced performance of the hybrid network can also be demonstrated by comparing its progress during training with that of the stand-alone neural network. Figure 8 presents the learning curves of the two networks in their training