International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XXXV, Part B7. Istanbul 2004
tolerating the noise, distortion and incompleteness of data taken
from practical applications. Researchers have developed
several different paradigms of ANNs [4,5]. These paradigms
are capable of detecting various features represented in input
signals. An ANN is usually composed of many nonlinear
computational elements. These computational elements operate
in parallel to simulate the function of the human brain. Hence, an
ANN is characterized by the topology, activation function, and
learning rules. The topology is the architecture of how neurons
are connected, the activation function is the characteristics of
each neuron, and the learning rule is the strategy for learning
[4,5]. An ANN is also well suited for parallel implementations
because of the simplicity and repetition of the processing
elements.
2.1 Unsupervised Models
One type of these networks, which possess the self-organizing
property, is called competitive learning networks. Three
different competitive learning networks, the simple competitive
learning network (SCL), Kohonen's self-organizing feature map
(KSFM) and the frequency-sensitive competitive learning
(FSCL) network were used as unsupervised training methods in
some recognition systems [7]. Similar to statistical clustering
algorithms, these competitive learning networks are able to find
the natural groupings from the training data set. The topology
of the Kohonen self-organizing feature map is represented as a
2-Dimensional, one-layered output neural net. Each input node
is connected to each output node. The dimension of the training
patterns determines the number of input nodes. Unlike the
output nodes in Kohonen's feature map, there is no
particular geometrical relationship between the output nodes in
both the simple competitive learning network and the
frequency-sensitive competitive learning network. During the
process of training, the input patterns are fed into the network
sequentially. Output nodes represent the ‘trained’ classes and
the center of each class is stored in the connection weights
between input and output nodes.
The following algorithm outlines the operation of the simple
competitive learning network as applied to unsupervised
training [8]; let L denote the dimension of the input vectors,
which for us is the number of spectral bands. We assume that a
2-D (N x N) output layer is defined for the algorithm, where N
is chosen so that the expected number of classes is less than
or equal to N².
Step 1: Initialize weights w_ij(t) (i = 1, …, L and j = 1, …, N × N)
to small random values.
Steps 2 - 5 are repeated for each pixel in the
training data set for each iteration.
Step 2: Present an input pixel X(t) = (x_1, …, x_L) at time t.
Step 3: Compute the distance d_j between the input pixel and each output
node using
d_j = Σ_{i=1..L} (x_i − w_ij)², where i, j, L, w_ij and
x_i are defined as in steps 1 and 2.
Step 4: Select an output node j which has minimum distance
(i.e. the winning node).
Step 5: Update weights of the winning node j using
w_ij(t+1) = w_ij(t) + η(t) · (x_i − w_ij(t)), i = 1, …, L
and 1 ≤ j ≤ N × N, where η(t) is a monotonically slowly
decreasing function of t and its value is between 0 and 1.
Step 6: Select a subset of these N² output nodes as spectral
classes.
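The six steps above can be sketched in code. This is a minimal illustration, not the authors' implementation; the flattened number of output nodes n_out (standing in for N × N), the linearly decaying learning rate, and the fixed iteration count are all assumptions made here for concreteness.

```python
import numpy as np

def scl_train(pixels, n_out, n_iter=10, eta0=0.5, seed=0):
    """Simple competitive learning (SCL), unsupervised training.

    pixels: (num_pixels, L) array of spectral vectors (L = number of bands).
    n_out:  number of output nodes, chosen >= the expected number of classes.
    Returns an (n_out, L) weight matrix whose rows are the learned class centers.
    """
    rng = np.random.default_rng(seed)
    L = pixels.shape[1]
    # Step 1: initialize weights to small random values.
    w = rng.uniform(0.0, 0.1, size=(n_out, L))
    t, t_max = 0, n_iter * len(pixels)
    for _ in range(n_iter):
        for x in pixels:                      # Step 2: present input pixel X(t).
            d = np.sum((x - w) ** 2, axis=1)  # Step 3: distance to every output node.
            j = int(np.argmin(d))             # Step 4: winning node (minimum distance).
            eta = eta0 * (1.0 - t / t_max)    # assumed decreasing eta(t), 0 < eta <= eta0
            w[j] += eta * (x - w[j])          # Step 5: update only the winner's weights.
            t += 1
    return w  # Step 6: spectral classes are selected from the rows of w.
```

Note how step 5 pulls only the winning node toward the input; the dead-unit (underutilization) problem mentioned below follows directly from the fact that losing nodes are never updated.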
Competitive learning provides a way to discover the salient
general features which can be used to classify a set of patterns.
However, there are many problems associated with competitive
learning neural networks in the application of remotely sensed
data. Among them are: 1) underutilization of some neurons [5],
2) the learning algorithm is very sensitive to the learning rate η(t)
in remotely sensed data analysis, and 3) the number of
output nodes in the network must be greater than the number of
spectral classes embedded in the training set. Ideally, the
number of output nodes should be dynamically determined in
the training (learning) environment instead of being specified a
priori.
For multispectral classification, the simple competitive learning
networks are extended to include one more layer which will
determine the category to which the input pixel belongs. The
new architecture is shown in Figure 1. Each neuron in the
category decision layer calculates the difference between the
input pixel value and one category prototype; a simple logic box
then determines the minimum among the computed differences
and hence the corresponding category.
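The category decision layer described above amounts to a nearest-prototype rule. A small sketch, assuming the prototypes are the class centers produced by the competitive layer (the function name and array shapes are illustrative, not from the paper):

```python
import numpy as np

def decide_category(x, prototypes):
    """Category decision layer: one 'neuron' per stored prototype.

    x:          (L,) spectral vector of the input pixel.
    prototypes: (num_categories, L) array of category prototypes.
    Returns the index of the category whose prototype is nearest to x.
    """
    # Each decision neuron computes its difference to the input pixel...
    diffs = np.sum((x - prototypes) ** 2, axis=1)
    # ...and the logic box selects the minimum, i.e. the winning category.
    return int(np.argmin(diffs))
```

In classification mode this function is applied once per pixel, with the prototype array fixed after training.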
Figure 1. A modified competitive learning neural network
with the extension of a category decision layer.
2.2 Supervised Models
Many adaptive, non-parametric neural-net classifiers have been
proposed for real-world problems. These classifiers show that
they are capable of achieving higher classification accuracy
than conventional pixel-based classifiers [9]; however, few
neural-net classifiers which apply spatial information have been
proposed. The feed-forward multilayer neural network has
been widely used in supervised image classification of remotely
sensed data [10, 11]. Arora and Foody [12] concluded that the
feed-forward multilayer neural networks would produce the
most accurate classification results. A backpropagation feed-
forward multilayer network, as shown in Figure 2, is an
interconnected network in which neurons are arranged in
multiple layers and fully connected. There is a value called a weight
associated with each connection. These weights are adjusted
using the back-propagation algorithm or its variations, which is