
The classification procedure started with the selection of representative ground truth pixels. Considering homogeneity, 2.6 % of all pixels were selected. Following the usual rule of thumb, two thirds formed the training set and one third the test set. The design and training of the neural networks used only the training set, while the test set was reserved for independent quality measurement. The content of the training set is shown in Figure 2.
  
Figure 2. Content of the training set 
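As an illustration only, such a split can be coded in Matlab as sketched below; the variable names and the use of randperm are assumptions for this example, not the procedure actually used in the experiment.

  % gtPixels: N-by-7 matrix of TM intensities of the selected ground truth pixels
  % gtLabels: N-by-1 vector of the corresponding class numbers
  N      = size(gtPixels, 1);
  idx    = randperm(N);                    % random ordering of the pixels
  nTrain = round(2/3 * N);                 % two thirds for training
  trainX = gtPixels(idx(1:nTrain), :);     % training set
  trainY = gtLabels(idx(1:nTrain));
  testX  = gtPixels(idx(nTrain+1:end), :); % test set for independent quality check
  testY  = gtLabels(idx(nTrain+1:end));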
2.2. Neural thematic classification 
The thematic classification procedure can be realized by several types of artificial neural networks. The most commonly used type is the feed-forward network. These networks receive the inputs (in our mapping process the intensity values of the satellite image), pass these values through the layers and produce the output, which is a class membership. The only requirement for correct decisions is the existence of exact network parameters.
In the experiment the independent intensity values served as network inputs. From the variety of possible neural network structures I selected one that could process the raw intensities, i.e. there was no need for previous coding or e.g. binary conversion. The second criterion in the network type selection was computation speed. Finding the correct network parameters by training can take a long time, therefore an efficient network structure and adequate learning and training algorithms were sought. The selection of the network's transfer functions also belongs to this design phase. The network shall produce a class number that directly represents the thematic class; this was a further selection criterion, important for the output layer.
Feed-forward neural networks are determined by training. The training method of these networks is backpropagation. Backpropagation is an iterative training method, where in the first step random network parameters (neuron weights and biases) are selected. The following repeated steps calculate the network's output, compare the required output to the calculated one, compute the difference (the so-called network error) and at the end propagate these differences "back" to the network parameters. The most important moment is therefore the modification of the parameters. The calculation of the parameters' change requires the derivative of the transfer function: the easier this calculation is, the faster the training.
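In its usual textbook form (not quoted from the paper), the backpropagation update changes each weight in proportion to the negative gradient of the network error, which is where the transfer function derivative enters through the chain rule:

\Delta w_{ij} = -\eta \, \frac{\partial E}{\partial w_{ij}}

where E is the network error and \eta the learning rate.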
For all the mentioned reasons the following network structure was chosen. The proposed neural network had three neuron layers. Several authors have pointed out that most technical problems can be solved by such networks, while the complexity remains acceptable. The transfer function of the first and second (hidden) layers is the tangent sigmoid (tansig), in other words the hyperbolic tangent transfer function. The formula of the calculation is
f(x) = \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}} = \frac{2}{1 + e^{-2x}} - 1          (1)
and the derivative function is not difficult either:
f'(x) = \frac{d}{dx} \tanh(x) = 1 - \left[ f(x) \right]^{2}          (2)
The transfer function of the last (output) layer is linear (purelin), so the network is able to produce an output in an unbounded interval:

f(x) = x          (3)
The derivative of the last transfer function is constant, namely 1. Using a linear output layer, the desired class membership can appear on a single neuron; this means that the last layer had only one computing element.
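As a quick numerical illustration (a sketch with variable names of my own, not code from the experiment), the equivalence of the two forms in equation (1) and the derivative of equation (2) can be checked directly in Matlab:

  x  = linspace(-3, 3, 7);               % a few sample input activations
  f1 = tanh(x);                          % hyperbolic tangent form of equation (1)
  f2 = 2 ./ (1 + exp(-2*x)) - 1;         % second form of equation (1), used by tansig
  df = 1 - tanh(x).^2;                   % derivative according to equation (2)
  max(abs(f1 - f2))                      % numerically zero: both forms agree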
The goal of the current experiment was to gain experience with handling e.g. neighborhood information, which increases the dimensionality of the training set; therefore a very efficient training mechanism is essential. From mathematical optimization the Levenberg-Marquardt (LM) optimization was selected. The LM algorithm is a fast training method; it requires a large amount of memory, but the training is really quick. This method is realized in the applied Toolbox with an extended memory management option: the usage of the memory is scalable, depending on the need for and availability of computer RAM.
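In its usual textbook form (the paper itself does not write it out), the LM weight update is

\Delta w = -\left( J^{T} J + \mu I \right)^{-1} J^{T} e

where J is the Jacobian of the network errors with respect to the weights and biases, e is the vector of network errors and \mu is an adaptively changed damping factor; storing J^{T} J is what makes the method memory intensive, yet fast to converge.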
The network error was calculated with the mean squared error (mse) performance function. The learning applied the gradient descent weight/bias learning function with momentum. This option makes it possible to push the learning further when a local error minimum is found.
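In the standard formulation of gradient descent with momentum (again the general form, not a quotation from the paper), part of the previous weight change is retained:

\Delta w(t) = -\eta \, \frac{\partial E}{\partial w} + \alpha \, \Delta w(t-1)

where \alpha is the momentum constant; the second term keeps the weights moving across flat regions and shallow local minima.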
The training is repeated until a desired error goal is reached. The goal value is an important design parameter.
The designed neural network had 7 inputs as LANDSAT TM 
has 7 channels. 
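Putting the above design decisions together, a network of this kind can be set up with the Neural Network Toolbox roughly as sketched below; the hidden layer sizes (here 10 and 5 neurons), the error goal and the variable names are assumptions for illustration, not the values of the actual experiment.

  % P: 7-by-N matrix of training intensities, T: 1-by-N vector of class numbers
  net = newff(minmax(P), [10 5 1], ...             % two hidden layers and one output neuron
              {'tansig', 'tansig', 'purelin'}, ... % transfer functions of the three layers
              'trainlm', 'learngdm', 'mse');       % LM training, momentum learning, mse error
  net.trainParam.goal = 1e-3;                      % desired error goal (assumed value)
  net = train(net, P, T);                          % backpropagation training of weights and biases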
The output is computed according to the following formula:

y = f_3\left( W_3 \, f_2\left( W_2 \, f_1\left( W_1 x + b_1 \right) + b_2 \right) + b_3 \right)          (4)

where f_1, f_2, f_3 are the transfer functions, W_1, W_2, W_3 the layers' weight matrices, b_1, b_2, b_3 the layers' bias vectors, x the input intensity vector and y the output class membership. The training procedure deals with the determination of the unknown weight and bias values. The design of such networks is also iterative: several layer structures (different numbers of neurons per hidden layer) must be evaluated until the right structure is found.
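Equation (4) maps directly onto a few lines of matrix code; the sketch below assumes the weight matrices and bias vectors have already been taken from a trained network (e.g. net.IW{1,1}, net.LW{2,1}, net.LW{3,2} and net.b{1}..net.b{3} in the Toolbox), and the variable names are illustrative.

  % x: 7-by-1 vector of TM intensities of a single pixel
  a1 = tanh(W1*x  + b1);     % first hidden layer, tansig transfer function (eq. 1)
  a2 = tanh(W2*a1 + b2);     % second hidden layer, tansig transfer function
  y  = W3*a2 + b3;           % linear (purelin) output layer: the class number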
The simulation of the neural network produces the output. The usual realization is the pixel-wise method, which means that the network gets its inputs pixel by pixel. The second possible way uses matrix arithmetic, which is one of the most powerful tools in Matlab. The matrix algorithm gives an amazing computation speed: compared to the pixel-wise solution, the second method is 197 times faster. This value was measured in the classification of 10 000 pixels. The more optimal, finally implemented solution takes pixel blocks, computes the output for
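The difference between the two realizations can be sketched as follows (illustrative code with an assumed 7-by-N pixel matrix; the speed ratio quoted above is the paper's own measurement and is not reproduced here).

  % pixels: 7-by-N matrix, one column of TM intensities per image pixel
  % pixel-wise simulation: one network call per pixel
  yPixel = zeros(1, size(pixels, 2));
  for i = 1:size(pixels, 2)
      yPixel(i) = sim(net, pixels(:, i));
  end
  % matrix simulation: a single call processes all pixels at once
  yMatrix = sim(net, pixels);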
  