The quality of the neural network was measured on the test set.
The overall error of the network was 0.3 % (3 wrongly classified
pixels), so it made no sense to derive other accuracy measures.
The distribution of the thematic classes is as follows:
Class    Amount       %
F1        16709    15.3
F2        11173    10.3
M1         6329     5.8
M2         9818     9.0
U         57584    52.8
W          7353     6.7

Table 1. Statistics of the thematic classes
The most frequent class was the Urban class. (Please remember
that the image covers Budapest and its agglomeration!)
The trained neural network assigned every image pixel to a
thematic class; there were no rejections. The final network
structure was 7-14-1. The training took about 143 s (measured
on a Pentium 166 machine with 64 MB RAM). The second run gave
the final network.
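For illustration only, the sketch below shows how a network of the 7-14-1 type described above could be trained and evaluated on a test set. It is not the original MATLAB implementation: scikit-learn's MLPRegressor stands in for the Levenberg-Marquardt training used in the project, and the single-output class-code encoding, the synthetic data and all variable names are assumptions made here.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_pixels, n_bands, n_classes = 5000, 7, 6
    X = rng.random((n_pixels, n_bands))           # stand-in for the 7 image bands
    y = rng.integers(1, n_classes + 1, n_pixels)  # class codes 1..6 (F1..W)

    # split into training and test sets
    X_train, X_test = X[:4000], X[4000:]
    y_train, y_test = y[:4000], y[4000:]

    # 7 inputs - 14 hidden neurons - 1 output, mirroring the 7-14-1 structure
    net = MLPRegressor(hidden_layer_sizes=(14,), activation='tanh',
                       max_iter=2000, tol=1e-3)
    net.fit(X_train, y_train.astype(float))

    # round the single output to the nearest class code and count wrong pixels
    pred = np.rint(net.predict(X_test)).astype(int)
    print("test error: %.1f %%" % (100.0 * np.mean(pred != y_test)))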
The thematic classification is shown in Figure 9.
Figure 9. Thematic map made from the original image bands
3.2. Networks with PCA
The design of the neural network for processing the
PCA-transformed image bands was slower. The desired error goal
was at first 0.01, then 0.005, and finally 0.001. The complexity
of the network structure was enlarged stepwise, from 7-10-1 to
the final 8-15-1.
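As a hedged illustration of the 95 % step, the sketch below shows one common way to keep just enough principal components to explain 95 % of the total variance; the data array, its shape and the use of scikit-learn's PCA are assumptions standing in for whatever implementation was used in the project.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.random((5000, 7))        # placeholder for the 7 original bands

    pca = PCA(n_components=0.95)     # keep 95 % of the total variance
    X_pca = pca.fit_transform(X)
    print(X_pca.shape[1], "transformed bands keep",
          round(100 * pca.explained_variance_ratio_.sum(), 1), "% of the variance")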
It was interesting that although the input dimensionality was
reduced, the training time varied from 77 s to 567 s; the final
training phase took 510 s. This again demonstrates the
"black-box" nature of artificial neural networks.
The final network reached the zero training error threshold, as
the network for the original bands did. The test result was a
0.4 % error, and the network rejected 9 pixels.
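The rejection criterion is not spelled out here, so the following is only one plausible interpretation, written as an assumption: with a single output unit that regresses the class code, a pixel can be rejected when the raw output lies too far from every valid integer code.

    import numpy as np

    def classify_with_rejection(raw_outputs, n_classes=6, tol=0.4):
        """Round single-output network responses to class codes, rejecting
        pixels whose raw output is not close enough to any valid code."""
        codes = np.rint(raw_outputs)
        rejected = (np.abs(raw_outputs - codes) > tol) | (codes < 1) | (codes > n_classes)
        labels = codes.astype(int)
        labels[rejected] = 0                       # 0 marks a rejected pixel
        return labels, int(rejected.sum())

    labels, n_rejected = classify_with_rejection(np.array([1.1, 2.45, 3.02, 6.9]))
    print(labels, n_rejected)                      # [1 0 3 0] 2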
The output map appears visually much smoother and is of very
high quality (Figure 10). The classes have more contrast. The
distribution of the classes is similar to that of the original
version, but the Urban/Meadow 2 ratio has changed: the PCA
classification detected more M2 (19.5 %) and less U (38.1 %).
The other classes are very similar. There are, however, some
disturbing mixed pixels in the River Danube.
Figure 10. Thematic map made with PCA bands (95 %
information content)
The PCA was executed not only for 95 % information content; a
whole series of networks was designed. The most important
parameters are collected in Table 2, and a brief sketch of the
series is given after the table. The training data sets of these
neural networks were the transformed image bands in every case.
Number of bands   Errors in the test set [%]   Number of epochs   Training time [s]
       7                     0.4                       6                 218
       6                     0.1                      15                   -
       5                     0.1                      17                 215
       4                     0.3                      32                 369
       3                     0.4                      36                 510

Table 2. Features of the PCA-networks
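The experiment series summarized in Table 2 can be sketched roughly as below. The synthetic data, the fixed 14-neuron hidden layer and the use of scikit-learn (instead of the original Levenberg-Marquardt training) are assumptions for illustration only.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.random((5000, 7))
    y = rng.integers(1, 7, 5000).astype(float)          # class codes 1..6
    X_tr, X_te, y_tr, y_te = X[:4000], X[4000:], y[:4000], y[4000:]

    for n_bands in range(7, 2, -1):                     # 7, 6, 5, 4, 3 bands
        pca = PCA(n_components=n_bands).fit(X_tr)
        net = MLPRegressor(hidden_layer_sizes=(14,), activation='tanh',
                           max_iter=2000, tol=1e-3)
        net.fit(pca.transform(X_tr), y_tr)
        pred = np.rint(net.predict(pca.transform(X_te)))
        err = 100.0 * np.mean(pred != y_te)
        print("%d bands: %.1f %% test error, %d epochs" % (n_bands, err, net.n_iter_))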
Some regularity can be noticed in Table 2. As shown, with two or
only one transformed band no neural network could be trained.
The test results were optimal with 5 to 6 transformed bands
(0.1 % error). With the reduction of the data vector
dimensionality, the number of epochs and the required training
time increased. For the neural networks, the reduction of the
data amount also means a reduction of information; therefore the
training took longer (they needed longer "drilling"). The
training time for the 6-band case was unfortunately not
registered.
The whole series had the same accuracy on the training data; no
mixed pixels arose. The network structure was 7-14-1, except for
the mentioned last case (normal PCA with 95 % information
content), where it was 8-15-1. This structure seemed to be
universal in the project. The desired error goal was also very
similar: except for the 7-band case, all goal values were 0.001;
with all transformed bands the acceptable error goal was 0.0001.
3.3. Neighborhood in neural networks
Most of the difficulties arose in designing and training the
neural networks that handle the neighborhood information. The
reason has already been introduced: the extreme dimensionality
of the intensity vector.
In the 4-neighborhood case these vectors have a length of 35.
The initial parameter setting took longer, and the
memory-reduction option of the Levenberg-Marquardt method was
also very helpful.
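A minimal sketch of how such 35-element vectors can be assembled is given below, assuming the image is stored as a (rows, cols, 7) array: the pixel's own 7 band intensities are concatenated with those of its four neighbours (5 x 7 = 35). The function name and array layout are illustrative assumptions, and border pixels are not handled.

    import numpy as np

    def neighborhood_vector(image, r, c):
        """Return the 35-element intensity vector of pixel (r, c):
        its own 7 band values plus those of its 4 neighbours."""
        offsets = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
        return np.concatenate([image[r + dr, c + dc] for dr, dc in offsets])

    img = np.random.default_rng(0).random((100, 100, 7))   # placeholder image
    print(neighborhood_vector(img, 50, 50).shape)           # (35,)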