The placement of the training area masks is shown in Figure 6.
Figure 6. Mask of the training areas
My processing environment was the mathematical software
package MATLAB, developed by Mathworks Inc. I wrote all the
training area handling and classification procedures myself, and
most of the neural network procedures are my own work as well.
Later the Neural Network Toolbox from Mathworks Inc. became
available to me, so I used its procedures too.
Let me begin with the minimum distance method. The essence
of the method is ranking each pixel by its distance from the
individual classes. A pixel's distance from the thematic class
water is

d_{water} = \sqrt{ \sum_i ( X_i - \bar{X}_{water,i} )^2 }    (5)

where
d_{water}        distance from the class water
X                intensity vector of the pixel
\bar{X}_{water}  mean vector of the class water.
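A minimal MATLAB sketch of this decision rule, written in present-day MATLAB rather than in the code of the study itself; the names pixels (one row per pixel, one column per band) and means (one row per class mean vector) are mine, not the paper's:

    % Minimum distance classification (equation 5): assign each pixel to
    % the class whose mean vector is nearest in the Euclidean sense.
    % pixels : n-by-b matrix of pixel intensity vectors
    % means  : k-by-b matrix of class mean vectors
    function labels = mindist_classify(pixels, means)
        n = size(pixels, 1);
        k = size(means, 1);
        d = zeros(n, k);
        for c = 1:k
            dev = pixels - means(c, :);       % deviations (implicit expansion, R2016b+)
            d(:, c) = sqrt(sum(dev.^2, 2));   % Euclidean distance to class c
        end
        [~, labels] = min(d, [], 2);          % nearest class wins
    end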
The maximum likelihood method decides according to the highest
class probability. It is the best of the traditional methods in
use today. The calculation takes into account the mean vectors
and covariance matrices of the thematic classes (Barsi, 1994).
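The decision can be sketched with the Gaussian discriminant that such a classifier maximizes under equal priors; the cell arrays mu and Sigma, holding the class means and covariances estimated from the training areas, are illustrative names of my own:

    % Maximum likelihood classification with a Gaussian class model and
    % equal priors: choose the class c maximizing
    %   g_c(x) = -ln det(Sigma_c) - (x - mu_c)' * inv(Sigma_c) * (x - mu_c)
    function labels = ml_classify(pixels, mu, Sigma)
        n = size(pixels, 1);
        k = numel(mu);
        g = zeros(n, k);
        for c = 1:k
            dev = pixels - mu{c}(:)';              % deviations from the class mean
            mah = sum((dev / Sigma{c}) .* dev, 2); % squared Mahalanobis distances
            g(:, c) = -log(det(Sigma{c})) - mah;   % log-likelihood up to a constant
        end
        [~, labels] = max(g, [], 2);               % most probable class wins
    end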
The accuracy of the methods is shown in the comparison of
traditional and neural techniques in chapter 3.
2.2. Classification by feedforward neural networks
Firstly the network had 6 neurons in the first layer and 4 in the
second, and the network error (Sum Squared Network Error, SSE)
was 0.01 (model ne2). The training material contained the mean
vectors of the thematic classes. Because of the poor
classification results I analysed the band-wise distribution of
the training vectors. The histogram of the class meadow has
two peaks in band 5 (Barsi, 1995).
Figure 7. Histogram of the meadow pixels in band 5
The figure shows that the training area wasn't homogeneous. If I
split the area into two parts and train the network accordingly,
the error becomes much smaller (model ne2 2).
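The inspection and the split can be sketched as follows, assuming the training pixels of the class are in a matrix meadow (one row per pixel, one column per band) and taking the valley position thr from the plot; both names and the threshold value are illustrative:

    % Band-wise histogram of a training class, as used to reveal the two
    % peaks of the meadow class in band 5
    band = 5;
    histogram(meadow(:, band), 32);           % 32-bin histogram of band 5
    % If the histogram is bimodal, split the training area at the valley
    % between the peaks and train with two sub-classes instead of one
    thr = 90;                                 % valley position read off the plot
    meadow1 = meadow(meadow(:, band) <  thr, :);
    meadow2 = meadow(meadow(:, band) >= thr, :);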
The accuracy can be increased by using three neuron layers.
The layers of the model ne3 contain 6, 5 and 4 neurons (SSE =
0.001).
However, an increased number of neurons doesn't lead to
increased accuracy, as shown by the model ne3 2 (12, 6, 4
neurons) and the model ne3 3 (6, 8, 4 neurons).
All the previously mentioned models were trained with the mean
vectors of the thematic classes. Let me see what happens if I
use the pixels of the training areas themselves as training
vectors!
All the training areas together contain 2757 pixels. I selected
every tenth of them. I did this selection for two reasons (see
the sketch after this list):
1. I can train the networks with real pixels, yet the
training material doesn't become too large.
2. There remain pixels whose class membership I know
well but which haven't taken part in the training.
I can use these pixels for checking the classification.
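A sketch of this selection; allpix (2757 rows, one per masked pixel) and alllab (the known class labels) are assumed names for the masked data:

    % Every tenth training-area pixel goes into the training material;
    % the remaining pixels, with known classes, serve as check pixels.
    idx = 1:10:size(allpix, 1);               % every tenth pixel
    trainX = allpix(idx, :);                  % training vectors
    trainY = alllab(idx);
    rest = true(size(allpix, 1), 1);
    rest(idx) = false;
    checkX = allpix(rest, :);                 % known but unseen in training:
    checkY = alllab(rest);                    %   used to check the result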
The model ne4 has 6, 5 and 4 neurons; the SSE was 0.0001. This
model shows such good accuracy that I became curious how a
two-layer neural network would classify with the same training
material. The result was surprising both in model ne4 2 (12, 4
neurons) (Figure 10) and in model ne4 3 (24, 4 neurons) (see
chapter 3). In both cases the sum squared network error was
0.0001.
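For orientation, an ne4-style experiment looks roughly like this with the present-day Neural Network Toolbox API (feedforwardnet/train), not with the 1993-era toolbox procedures actually used in the study; trainX, trainY and checkX are the matrices from the selection sketch above:

    % Three-layer network in the paper's sense: hidden layers of 6 and 5
    % neurons; the 4-neuron output layer is sized from the targets.
    net = feedforwardnet([6 5]);
    net.performFcn = 'sse';                   % sum squared error, as in the paper
    net.trainParam.goal = 1e-4;               % stop at SSE = 0.0001
    net = train(net, trainX', full(ind2vec(trainY')));  % columns = samples
    scores = net(checkX');                    % classify the check pixels
    [~, predicted] = max(scores, [], 1);      % winning output neuron = class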
2.3. Classification by radial basis network
The design of a neural network with a radial basis transfer
function differs slightly from the customary backpropagation
networks. In this case, in order to learn the training vectors
exactly, the algorithm also determines the necessary number of
neurons. My radial basis network had 2 layers: 275 neurons in
the first layer, with a transfer function as in Figure 3, and 4
linear neurons in the second layer. The training of the radial
basis network was much faster: while a backpropagation network
needed 4123 epochs (nearly 21 million floating point operations)
to learn the training material, the radial basis network required
only 5 epochs (~38 000 flops) (Demuth, 1993).
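With the present-day toolbox, the exact design described here corresponds to newrbe, which places one radbas neuron per training vector (hence a first-layer size close to the number of training pixels) and solves the linear second layer directly; the spread value below is illustrative:

    % Exact radial basis design: one radbas neuron per training vector,
    % linear output layer computed by solving a linear system.
    spread = 1;                               % width of the radbas functions
    rbnet = newrbe(trainX', full(ind2vec(trainY')), spread);
    scores = sim(rbnet, checkX');             % classify the check pixels
    [~, predicted] = max(scores, [], 1);      % winning output neuron = class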
At first the test pixels were classified with each model: these
are pixels of the training areas whose class membership is known,
but which did not take part in the training. The calculated
accuracies of the models are given in table form (Table 1):

Table 1. [table body and caption truncated in the source copy]

Figure 8. [figure and caption truncated in the source copy]

[The remainder of this passage is truncated in the source copy.]