Full text: XVIIth ISPRS Congress (Part B3)

1. INTRODUCTION 
Artificial neural networks have been used for image processing and have shown great potential in classification
of remotely sensed data. However, the amount of data 
necessary for training a neural network has not been 
addressed. Benediktsson et al. (1990) classified an 
image (135 x 131 pixels) using a neural network with 
the back-propagation learning algorithm. They trained 
with approximately seven percent of the image data and 
obtained a training accuracy of 93%. Hepner et al. 
(1990) performed a neuro-classification of a four-band 
(bands 1, 2, 3 and 4) LANDSAT Thematic Mapper 
(TM) image (459 x 368 pixels) with four land-cover
categories (water, urban, forest and grass). They used 
100 (10 x 10) pixels per category for training the neural 
network classifier. Two LANDSAT TM images were 
enhanced with digital land-ownership data and then
classified for crop residues (Zhuang et al., 1991;
Zhuang, 1990). The neural network classifiers were 
trained with approximately ten percent of the TM data, 
and an overall accuracy of more than 90% was obtained 
for each classification. In these neuro-classifications, one to ten percent of the image data were used for
training the neural networks; the amount of data needed for training therefore merits systematic investigation.
The objective of this study was to investigate the 
amount of image data necessary for training a neural 
network classifier. A LANDSAT TM image was 
classified with the classifier, and 5%, 10%, 15%, and 
20% of the TM data were used for the training. 
2. MATERIALS AND METHODS 
2.1 LANDSAT TM Data 
The LANDSAT TM scene used in this project was 
acquired 29 July 1987. The scene covered an approximately 10.36 km² area (107 x 107 pixels), including
sections 3, 4, 9, and 10 located in T28N, R5E of Richland township, Miami County, Indiana, U.S.A. Six
categories of land cover for these sections included
corn, soybeans, forest, pasture, bare soil, and river. The
ground observation data were provided for section 9. 
Aerial photographs from 1987 were available for this 
study area. The U.S. Geological Survey 1:24,000 topographic map of the Roann, Indiana Quadrangle was also
used as a reference. 
2.2 Neural Network 
The neural network used in this study was configured as 
a three-layer back-propagation network, including input, 
hidden and output layers. Adjacent layers were fully 
interconnected. The input layer was composed of an 
Nx8 array of binary-coded units, corresponding to N 
bands (N = 7 in this study) of the 8-bit LANDSAT TM 
data. Twenty units were assigned to the hidden layer, 
and six thermometer-coded units in the output layer 
referred to the six categories of land cover. With thermometer coding, for example, category 4 of the six
categories would be represented as 1 in the four most-significant bits and 0 in the remaining two bits
(4 → 111100).
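The two encodings described above can be sketched as follows. The helper names are illustrative, not from the paper; the example pixel value is arbitrary.

```python
# Sketch of the input and output encodings: 8-bit band values become
# binary-coded input units, and category labels become thermometer-coded
# output units (function names are illustrative assumptions).

def binary_encode_pixel(bands):
    """Encode N 8-bit band values as an N x 8 array of 0/1 input units,
    most-significant bit first."""
    return [[(value >> bit) & 1 for bit in range(7, -1, -1)]
            for value in bands]

def thermometer_encode(category, n_categories=6):
    """Thermometer-code a 1-based category index: category k becomes
    k ones followed by (n_categories - k) zeros."""
    return [1] * category + [0] * (n_categories - category)

# Category 4 of six, as in the example above.
print(thermometer_encode(4))       # [1, 1, 1, 1, 0, 0]
# A single band value of 130 (binary 10000010).
print(binary_encode_pixel([130]))  # [[1, 0, 0, 0, 0, 0, 1, 0]]
```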
For the training of a neural network, the TM data were 
fed to the input layer and propagated through the hidden 
layer to the output layer, and then the differences 
between the computed outputs and the desired outputs 
were calculated and fed backward to adjust the network 
connections (weights). This process continued until the 
maximum of the differences was less than or equal to 
the desired error. Additional details of the network are 
given in Zhuang (1990). 
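The training loop described above (forward propagation, error computation, backward weight adjustment, and the maximum-error stopping rule) can be sketched with a minimal three-layer network. The network sizes, learning rate, and toy data here are illustrative assumptions, not the paper's actual configuration or data.

```python
# Minimal back-propagation sketch of the training process: propagate
# forward, compare computed and desired outputs, feed the differences
# backward to adjust the weights, and stop once the maximum difference
# is within the desired error (all dimensions and data are toy values).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dimensions: 8 binary input units, 20 hidden units, 6 output units.
W1 = rng.normal(scale=0.5, size=(8, 20))
W2 = rng.normal(scale=0.5, size=(20, 6))

X = rng.integers(0, 2, size=(10, 8)).astype(float)    # binary-coded inputs
T = np.tile([1.0, 1.0, 1.0, 1.0, 0.0, 0.0], (10, 1))  # thermometer targets

eta, desired_error = 0.5, 0.1
for epoch in range(10000):
    h = sigmoid(X @ W1)                 # forward: input -> hidden
    y = sigmoid(h @ W2)                 # forward: hidden -> output
    err = T - y
    if np.max(np.abs(err)) <= desired_error:  # stopping rule from the text
        break
    # Backward: generalized delta rule adjusts the connection weights.
    delta_out = err * y * (1.0 - y)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
    W2 += eta * h.T @ delta_out
    W1 += eta * X.T @ delta_hid
```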
The neural network simulator was NASA NETS
(Baffes, 1989), which runs on a variety of machines
including workstations and PCs. The simulator provides
a flexible system for manipulating a variety of neural
network configurations and uses the generalized delta
back-propagation learning algorithm. The NETS software
was run on SUN SPARC workstations for image
classification. Interface routines were developed to
make NETS suitable for image classification (Zhuang, 1990).
2.3 Neuro-Classifications 
The neural network classified an unknown pixel based 
on the knowledge learned from a training data set. We 
trained a neural network separately with 5%, 10%, 15%, 
and 20% of the TM data; thus four neural networks with
the same configuration were trained separately, one for
each percentage of training data. These four neural
network classifiers were named
NN-5%, NN-10%, NN-15%, and NN-20%. For the study 
area, training samples were selected for six land-cover 
categories based on the corresponding reference information, including the ground observation data, the aerial
photographs, and spectral features from individual 
categories. The training data for category river were 
obtained by an unsupervised classification (clustering) 
of the portion of the image containing the river. 
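The clustering step used for the river training data can be sketched with a simple k-means procedure over pixel spectra. The paper does not specify the clustering algorithm used, so k-means, the cluster count, and the toy data here are assumptions for illustration.

```python
# Sketch of obtaining training samples by unsupervised clustering of a
# portion of the image: group pixel spectra into k spectral classes
# (the algorithm choice and all values here are illustrative).
import numpy as np

def kmeans(pixels, k, iters=20):
    """Cluster pixel spectra (rows) into k spectral classes."""
    centers = pixels[:k].astype(float)  # simple deterministic initialization
    for _ in range(iters):
        # Distance from every pixel to every cluster center.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Toy 7-band pixels: two well-separated spectral groups, alternating rows.
pix = np.array([[20.0] * 7, [200.0] * 7] * 5)
labels, centers = kmeans(pix, k=2)
```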
2.4 Normalization of Classification Results 
With the iterative proportional fitting procedure, a contingency table can be standardized to have uniform margins
for both rows and columns in order to examine the
association or interaction of the table (Fienberg, 1971).
The classification results were summarized as a confusion matrix for each classifier. Individual entries of the
confusion matrix were divided by the table total,
producing a contingency table of proportions. The
contingency table was then normalized with the iterative
proportional fitting procedure, which successively
rescales the rows and columns until both margins equal
one. A standard
function from SAS software (SAS Institute, 1988a) was 
used to implement the procedure on contingency tables. 
Before implementing the iterative proportional fitting 
procedure, we eliminated zero counts in a contingency 
table using the method of smoothing with pseudo-counts 
(Fienberg and Holland, 1970). 
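The normalization pipeline above (pseudo-count smoothing, conversion to proportions, then iterative proportional fitting to uniform margins) can be sketched as follows. The function name, the pseudo-count value, and the example confusion matrix are illustrative assumptions; the paper used SAS rather than this code.

```python
# Sketch of the normalization: smooth zero counts with pseudo-counts,
# divide by the table total to get a contingency table, then iteratively
# rescale rows and columns until both margins equal one (all specific
# values here are illustrative).
import numpy as np

def ipf_normalize(confusion, pseudo=0.5, iters=100):
    """Normalize a square confusion matrix to uniform unit margins."""
    table = (confusion + pseudo).astype(float)  # eliminate zero counts
    table /= table.sum()                        # contingency table of proportions
    for _ in range(iters):
        table *= (1.0 / table.sum(axis=1))[:, None]  # rows sum to one
        table *= (1.0 / table.sum(axis=0))[None, :]  # columns sum to one
    return table

# Toy 3-class confusion matrix with a zero count.
cm = np.array([[50, 2, 0],
               [3, 40, 5],
               [0, 4, 45]])
norm = ipf_normalize(cm)
```

After convergence, every row and column of `norm` sums to one, so the diagonal structure can be compared across classifiers independently of class sizes.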