XVIIIth Congress (Part B7)

3.2 Neural network in pixel-wise classification
The main advantage of the neural network as a classifier for this project was its ability to handle complex data patterns, since it is nonlinear. In this case the complexity in the data patterns concerned the variation between the image channels on one hand and the slope and aspect channels on the other. For some of the subclasses it was noticed that the signatures for the near-infrared channel and the slope and aspect channels have spread and complex patterns, which had not been resolved by the subclass division.
The same subclasses of the training fields that were used with the maximum likelihood classifier were used to train the neural network. The values for the 5 input channels were normalized to a range between 0 and 1 in order to speed convergence to the minimum error point in the network. An equal number of pixels in each class would have given the neural network its recommended balanced data sets, but to allow a fair comparison of the methods exactly the same data sets were used.
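The normalization step described above can be sketched as a per-channel min-max scaling. This is one common choice and only an illustration; the paper does not state the exact scaling formula, and the channel values below are hypothetical, not project data:

```python
def normalize_channels(pixels):
    """Scale each input channel to the range [0, 1] across all pixels.

    `pixels` is a list of per-pixel channel vectors (here: 5 channels,
    e.g. three image bands plus slope and aspect). Min-max scaling per
    channel is a common way to speed back-propagation convergence.
    """
    n_channels = len(pixels[0])
    lo = [min(p[c] for p in pixels) for c in range(n_channels)]
    hi = [max(p[c] for p in pixels) for c in range(n_channels)]
    return [
        [(p[c] - lo[c]) / (hi[c] - lo[c]) if hi[c] > lo[c] else 0.0
         for c in range(n_channels)]
        for p in pixels
    ]
```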
Several tests with different combinations of learning factors and numbers of nodes in the hidden layer were conducted in the standard back-propagation training process. The settings of the variables are plausible, and the guidelines from other projects only partly gave the same results. Regarding the learning factor there are different recommendations in the literature. We got the best convergence with a learning factor decreasing from approximately 4 to 0.1 for nets with a moderate number of nodes in the hidden layer, but for larger nets with around 100 hidden nodes a stable low (0.5) learning factor seems to give the best training result. Skidmore et al. (1994), among others, pointed out that even though the percentage of correctly classified training patterns tends to grow slightly as the number of hidden nodes is increased, there is an almost opposite tendency for the percentage of correctly classified test data, which indicates that the optimal number of hidden nodes is quite low.
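A decreasing learning-factor schedule of the kind used above can be sketched as a simple linear interpolation. The linear shape is an assumption; the paper gives only the start and end values:

```python
def learning_factor(iteration, total_iterations, start=4.0, end=0.1):
    """Decrease the learning factor linearly from `start` to `end`
    over the training run (hypothetical schedule shape; the paper
    states a decrease from ~4 to 0.1 but not the interpolation)."""
    frac = iteration / max(total_iterations - 1, 1)
    return start + (end - start) * frac
```

With 1050 iterations this starts at 4.0 on the first iteration and reaches 0.1 on the last.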
A program called the Stuttgart Neural Network Simulator (SNNS) was used to conduct the neural network classification. SNNS is distributed by the University of Stuttgart as free software; for more information see the User Manual [Zell et al. 1994]. A three-layer feed-forward net with 5 input nodes, 100 nodes in the hidden layer and 10 output nodes was found to give the best training results and was therefore used in the classification.
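The 5-100-10 topology can be illustrated with a minimal forward pass. SNNS performed the actual training; this sketch only shows the network shape, with sigmoid activations and random untrained weights as assumptions:

```python
import math
import random

def make_net(n_in=5, n_hidden=100, n_out=10, seed=0):
    """Random (untrained) weights for a three-layer feed-forward net
    with the 5-100-10 topology used here; each row holds one node's
    incoming weights plus a bias term."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
          for _ in range(n_hidden)]
    w2 = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
          for _ in range(n_out)]
    return w1, w2

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(net, x):
    """One forward pass: 5 normalized inputs -> 10 class activations."""
    w1, w2 = net
    h = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0]))) for row in w1]
    return [sigmoid(sum(w * v for w, v in zip(row, h + [1.0]))) for row in w2]
```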
3.2.1 Training results from neural networks. Dalen (1995) has done some tests with the same training data sets as used in this project. He got an overall training result of 86%, and for the individual classes results from 65-95%. This best result was achieved with a network with 20 hidden nodes. He used 1200 iterations where the learning factor decreased from 4.0 down to 0.1. For his limited test
data sets the overall result was 12%, and the totals of all stands in the individual classes range from 0-51% correctly classified. In these data sets there is no separation into areas with low and medium-high site classes as is done in Section 4.
In this project the same training data sets were used, but for the test data all the coniferous stands in the control area were used. For nets with a moderate number of nodes in the hidden layer, the best result was for a net with 25 hidden nodes. The result for this net was 86% correctly trained when only the highest "score" counts (winner takes all). This was achieved after 1050 iterations with a decreasing learning factor.
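The winner-takes-all scoring used above can be sketched directly: a pattern counts as correct when its highest-scoring output unit corresponds to the target class. The data below are hypothetical:

```python
def winner_takes_all_accuracy(outputs, targets):
    """Fraction of patterns whose highest-activation output unit
    matches the target class index ('winner takes all' scoring)."""
    correct = sum(
        1 for out, t in zip(outputs, targets)
        if out.index(max(out)) == t
    )
    return correct / len(outputs)
```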
As complex signature patterns necessitated the examination of neural networks, larger nets with more hidden nodes were also tested. The best training results were achieved after approximately 10000 iterations and 45 hours on a Sun SPARCstation LX. The training results were over 91% correctly trained (winner takes all), or over 87% if the 40-20-40 method is used in the training analysis.
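The 40-20-40 method mentioned above is, as documented for the SNNS analysis tool, a stricter scoring rule than winner-takes-all: an output unit only counts as "on" if its activation lies in the upper 40% of the range (>= 0.6) and as "off" if it lies in the lower 40% (<= 0.4), with the middle 20% treated as undecided. A sketch under that reading (thresholds and reading are from the SNNS documentation, not stated in this paper):

```python
def correct_402040(output, target, lo=0.4, hi=0.6):
    """40-20-40 rule: a pattern is correct only if the target unit's
    activation is >= hi and every other unit's activation is <= lo;
    any activation in the middle band leaves the pattern unclassified."""
    return output[target] >= hi and all(
        o <= lo for i, o in enumerate(output) if i != target
    )
```

A pattern that a plain winner-takes-all count would accept (e.g. target unit at 0.5, all others lower) is rejected here, which is why the 40-20-40 percentage is lower.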
4. RESULTS AND DISCUSSIONS 
First the results from phase 1 with the maximum- 
likelihood classifier in the post-object classification are 
presented. The coniferous stands in the control area have 
to be divided into two groups to get any meaningful 
analysis at all. In one group that consists of areas with 
mainly low-productive stands (low site classes), hardly 
any of the stands were correctly classified. The spectral 
reflections in these stands are totally dominated by the 
background vegetation and the trees are too spread to 
make a dominating influence on the reflection signals. 
The other group consists of stands with medium-high site 
classes and here the results are better. The total 
classification accuracy in this area is approximately 50% 
in the correct class, 30% in the correct super-class and 
20% wrongly classified after the post-object classification.
Among the stands that were not correctly classified are 
almost all the small stands. The rest of the wrongly classified stands are dominated by spectrally heterogeneous stands. Many of these have physical
variations in topography, crown coverage and/or amount 
of broad-leaf trees. 
The pixel-based classification results are given here for 
two interesting groups of stands: homogeneous and 
heterogeneous stands in the area dominated by medium-
high site classes. For the homogeneous stands 70-95% are 
correctly classified and for the problematic heterogeneous 
stands they vary from 5 to 30%. These heterogeneous 
stands with complex signature patterns (see Section 3.2) 
were the direct reason for phase 2. 
International Archives of Photogrammetry and Remote Sensing. Vol. XXXI, Part B7. Vienna 1996 