are shown in Figure 1. In practical applications, we can choose a different directional wavelet spatial filter $g_\alpha$ according to the direction required by Equation (11).
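Equation (11) itself is not reproduced here, but as a hedged illustration of the idea, one common realization of a directional Gaussian wavelet filter $g_\alpha$ is the first derivative of a two-dimensional Gaussian taken along the angle $\alpha$. The helper below is a minimal sketch under that assumption; its name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def directional_gaussian_kernel(alpha_deg, sigma=2.0, size=15):
    """Directional Gaussian wavelet kernel g_alpha, sketched as the first
    derivative of a 2-D Gaussian along the direction alpha (an assumption)."""
    alpha = np.deg2rad(alpha_deg)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    gauss = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    u = x * np.cos(alpha) + y * np.sin(alpha)   # coordinate along alpha
    kernel = -u / sigma**2 * gauss              # derivative of the Gaussian along u
    return kernel / np.abs(kernel).sum()        # normalize the response scale

# Kernels for the four directions extracted in Figure 1.
g = {a: directional_gaussian_kernel(a) for a in (30, 60, 120, 150)}
```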
Figure 1. Gaussian directional wavelet extraction of the high-frequency features in four directions: (a) the standard test image; (b) high-frequency features at 30°; (c) high-frequency features at 60°; (d) high-frequency features at 120°; (e) high-frequency features at 150°.

2.2 Constructing remote sensing image feature vectors

To construct the image feature vectors, select the directions

$$\alpha = k\frac{\pi}{m} \qquad (k = 0, 1, 2, \ldots; \; m \text{ is a fixed value}; \; 0° \le \alpha < 360°)$$

Then, by the wavelet transform according to Equation (12), we obtain feature vectors of the form

$$x = \{A_j, D_j^{\alpha} \mid \alpha = \alpha_1, \alpha_2, \alpha_3, \ldots, \alpha_{n-1}\}, \quad 1 \le j \le J$$

where $A_j$ is the low-frequency feature information at scale $j$. The vectors

$$x_j = \{A_j, D_j^{\alpha} \mid \alpha = \alpha_1, \alpha_2, \alpha_3, \ldots, \alpha_{n-1}\}, \quad 1 \le j \le J$$

are $n$-dimensional feature vectors formed from the high-frequency details of the different frequency bands (directions) and the low-frequency information.
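As a minimal sketch of this construction at a single scale $j$: SciPy's `gaussian_filter` stands in for the low-frequency component $A_j$, the directional kernels are assumed to come from a helper like the one sketched above, and the function name is illustrative rather than the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import convolve2d

def build_feature_vectors(image, kernels, sigma_low=2.0):
    """Stack the low-frequency information A_j and the directional
    high-frequency details D_j^alpha into one feature vector per pixel."""
    image = image.astype(float)
    low = gaussian_filter(image, sigma=sigma_low)           # A_j (assumed low-pass)
    details = [convolve2d(image, k, mode="same", boundary="symm")
               for k in kernels.values()]                   # D_j^alpha per direction
    # Result shape (H, W, n): one low-pass band plus one band per direction.
    return np.stack([low, *details], axis=-1)
```

Called as `build_feature_vectors(img, g)` with the four kernels above, this yields a 5-dimensional eigenvector per pixel, which is what the SOM in Section 3 classifies.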
3. SOM IMAGE SEGMENTATION
3.1 Self-organizing map
The model in Figure 2, introduced by Kohonen, displays the structure of the self-organizing map. Self-organizing image segmentation, based on competitive and cooperative learning with adaptive adjustment, is a form of unsupervised neural-network classification. A self-organizing map transforms an incoming image pattern of arbitrary dimension into a two-dimensional discrete map. It is characterized by a learning process that requires no prior information about the correct classes of the input patterns: the network classifies them by the statistical characteristics of the input, and the output neurons compete among themselves to be activated, or fired, so that only one output neuron, or one neuron per group, is on at any one time during learning. Excitatory synaptic weights connect the input pattern layer with the competition layer, and lateral inhibition connections exist between the neurons of the competition layer.
Figure 2. The Kohonen model: a two-dimensional array of postsynaptic neurons.
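Concretely, the Kohonen model of Figure 2 amounts to a two-dimensional lattice of neurons, each holding a synaptic weight vector of the same dimension as the input. A minimal sketch of that structure follows; the lattice size and random initialization are illustrative choices, not values from the paper.

```python
import numpy as np

rows, cols, m = 10, 10, 5              # lattice size and input dimension (illustrative)
rng = np.random.default_rng(0)
weights = rng.random((rows, cols, m))  # one synaptic weight vector W_j per neuron

# Lattice coordinates of every neuron, needed later for the cooperative
# (neighborhood) part of the learning process.
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"),
                axis=-1)
```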
3.1.1 Competitive Process: Let $m$ denote the dimension of the input space, and let an input pattern (eigenvector) selected from the input space be denoted by

$$X = (x_1, x_2, x_3, \ldots, x_m)$$

The synaptic weight vector of each neuron in the network has the same dimension as the input space. Let the synaptic weight vector of neuron $j$ be denoted by

$$W_j = (w_{j1}, w_{j2}, w_{j3}, \ldots, w_{jm}), \quad j = 1, 2, 3, \ldots, l$$

where $l$ is the total number of neurons in the network. Finding the winning neuron is mathematically equivalent to minimizing the Euclidean distance between $X$ and $W_j$:
$$i(x) = \arg\min_j \| X - W_j \| \qquad (13)$$
According to Eq. (13), $i(x)$ is the subject of attention because we want the identity of neuron $i$. The particular neuron $i$ that satisfies this condition is called the best-matching, or winning, neuron for the input eigenvector $X$.
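The competitive step of Eq. (13) translates directly into code: compute the Euclidean distance from $X$ to every $W_j$ and take the index of the minimum. A sketch under the lattice layout assumed above:

```python
import numpy as np

def winning_neuron(x, weights):
    """Eq. (13): i(x) = argmin_j ||X - W_j|| over all neurons j."""
    dists = np.linalg.norm(weights - x, axis=-1)   # ||X - W_j|| for every neuron
    return np.unravel_index(np.argmin(dists), dists.shape)

rng = np.random.default_rng(1)
weights = rng.random((10, 10, 5))      # 10 x 10 lattice, 5-dimensional inputs
x = rng.random(5)                      # an input eigenvector X
i_x = winning_neuron(x, weights)       # 2-D lattice index of the winning neuron
```

Storing the weights as a (rows, cols, m) array lets the whole distance computation broadcast in a single call, so the competition over all $l$ neurons needs no explicit loop.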