The output signal of a McCulloch-Pitts neuron is a function (usually nonlinear) of the activity $z_j$ in that neuron.
The dot product of the signal vector $S$ with the stored weight vector $w_j$ in (1) is a measure of the similarity of these two vectors. This means that the
McCulloch-Pitts neuron receives the incoming pattern $S$, compares it with the pattern $w_j$ stored in memory, and reacts to their similarity. Of course, there are also other ways of measuring the similarity between patterns. Probably the best-known measure of similarity is the Euclidean distance, given by
$$\sqrt{\sum_i (S_i - w_{ij})^2}. \tag{2}$$
This generalizes to the Minkowski metric, given by
$$\left[ \sum_i |S_i - w_{ij}|^r \right]^{1/r}. \tag{3}$$
In fuzzy logic, two scalars’ similarity is given by
$$\max\left[\min(S_i, w_{ij}),\, \min(1 - S_i, 1 - w_{ij})\right], \tag{4}$$
where $S_i$ and $w_{ij}$ must lie in the interval $[0, 1]$.
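As a concrete illustration, the following sketch (in Python with NumPy; the function names are our own) computes the similarity measures (1) through (4) for a signal vector and one stored weight column:

```python
import numpy as np

def dot_similarity(s, w):
    """Dot-product similarity used by the McCulloch-Pitts neuron, cf. (1)."""
    return np.dot(s, w)

def euclidean_distance(s, w):
    """Euclidean distance between signal and stored weights, eq. (2)."""
    return np.sqrt(np.sum((s - w) ** 2))

def minkowski_distance(s, w, r=2):
    """Minkowski metric, eq. (3); r = 2 recovers the Euclidean distance."""
    return np.sum(np.abs(s - w) ** r) ** (1.0 / r)

def fuzzy_similarity(s, w):
    """Componentwise fuzzy similarity, eq. (4); entries must lie in [0, 1]."""
    return np.maximum(np.minimum(s, w), np.minimum(1 - s, 1 - w))

s = np.array([0.9, 0.1, 0.5])    # incoming signal vector S
w = np.array([0.8, 0.2, 0.4])    # stored weight vector w_j
print(dot_similarity(s, w))      # larger means more similar
print(euclidean_distance(s, w))  # smaller means more similar
print(minkowski_distance(s, w, r=1))
print(fuzzy_similarity(s, w))    # one similarity value per component
```

Note that the dot product (1) and the fuzzy measure (4) grow with similarity, while the distances (2) and (3) shrink; any comparison across measures has to take this into account.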
4.2 Learning
Using McCulloch-Pitts neurons, one can build different types of neural networks for different purposes. Most work on neural networks involves learning, so the goal of most neural network models is to learn relationships between stimuli, though many other things can also be learned, such as the structure of the network, the activation functions, and even the learning rules themselves. For the task of feature grouping during the feature-extraction process, the main goal is to design a learning system which computes a classification, where a large set of input patterns is mapped onto a relatively small set of output patterns; these output patterns represent the sets into which the input patterns are classified.
When developing a neural network to perform a particular pattern-classification operation, we typically proceed by gathering a set of exemplars, or training patterns, and then using these exemplars to train the system, adjusting the weights on the basis of the difference between the values of the output units and the desired pattern. This kind of learning is referred to as supervised learning.
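As a minimal sketch of this procedure, assuming a single-layer linear network trained with the classical delta rule (the text does not commit to a particular update rule):

```python
import numpy as np

def train_supervised(patterns, targets, epochs=100, eta=0.1):
    """Adjust weights from the difference between actual and desired outputs."""
    M, N = patterns.shape[1], targets.shape[1]
    W = np.zeros((M, N))                      # weight matrix w_ij
    for _ in range(epochs):
        for s, t in zip(patterns, targets):   # one exemplar at a time
            y = s @ W                         # actual output for exemplar s
            W += eta * np.outer(s, t - y)     # move output toward the target
    return W

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [2.]])        # toy target: sum of the inputs
W = train_supervised(X, T)
print(X @ W)                                  # approximately reproduces T
```

Each presentation moves the output for that exemplar a small step toward the desired pattern; over many epochs the weights settle at a least-squares compromise across the training set.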
Another important kind of learning is so-called unsupervised learning, which occurs without a teacher. Such a learning algorithm learns to classify the input sets without being told anything; it performs this clustering solely on the basis of the intrinsic statistical properties of the set of inputs. This is exactly the property we need in order to perform a grouping operation before features can be described.

Figure 4: The competitive learning architecture
4.3 Competitive Learning
A mechanism of fundamental importance in unsupervised learning is described by the phrase competitive learning, which was developed in the early 1970s through contributions of MALSBURG, GROSSBERG, AMARI, and KOHONEN (cf. CARPENTER and GROSSBERG, 1988). Its main principle can be illustrated with the following simple mathematical formalism (cf. Figure 4).
There is a two-layer system of $M$ input neurons ($F_1$) and $N$ output neurons ($F_2$). These two layers of neurons are fully connected through the weights $w_{ij}$, $i = 1, \ldots, M$; $j = 1, \ldots, N$. Now, let $I_i$ denote the input to the $i$-th node $v_i$ of $F_1$, $i = 1, \ldots, M$, and let

$$z_i = \frac{I_i}{\sum_{k=1}^{M} I_k} \tag{5}$$
be the normalized activity of $v_i$ in response to the input pattern $I = (I_1, I_2, \ldots, I_M)$. The output signal $S_i$ of $v_i$, as mentioned earlier, is usually a nonlinear function of $z_i$ and can, for simplicity, be assumed to be equal to $z_i$.
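For instance, the normalization (5) is a one-liner (the values here are arbitrary):

```python
import numpy as np

I = np.array([2.0, 1.0, 1.0])    # input pattern (I_1, ..., I_M)
z = I / I.sum()                  # normalized activities z_i, eq. (5)
print(z)                         # -> [0.5  0.25 0.25]
```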
Now, each neuron $v_j$ of $F_2$, $j = 1, \ldots, N$, compares the incoming pattern $S = (S_1, S_2, \ldots, S_M)$ with the stored weight vector $w_j = (w_{1j}, w_{2j}, \ldots, w_{Mj})$ by using one of the measures of similarity mentioned above and produces an activity $z_j$ as in (1). The neuron $v_J$ with the maximum activity $z_J = \max_j(z_j)$ is selected (winner take all). This is the neuron whose weight vector is most similar to the input vector. This weight vector is then
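The passage breaks off at this point, but the forward pass just described (normalize the input, compute the dot-product activities, select the winner) can be sketched as follows. The final line, moving the winner's weight vector toward the input, is our assumption about how the interrupted sentence continues, since this is the standard competitive-learning update:

```python
import numpy as np

def competitive_step(S, W, eta=0.5):
    """One winner-take-all step for the two-layer F1/F2 system.

    S -- normalized signal vector of length M
    W -- M x N matrix whose columns are the stored weight vectors w_j
    """
    z = S @ W                        # dot-product activities z_j, as in (1)
    J = np.argmax(z)                 # maximum activity wins (winner take all)
    W[:, J] += eta * (S - W[:, J])   # assumed update: move winner toward input
    return J, W

I = np.array([3.0, 1.0, 0.0, 0.0])       # raw input pattern
S = I / I.sum()                           # normalization, eq. (5)
W = np.random.default_rng(0).random((4, 2))
W /= W.sum(axis=0)                        # columns normalized like the inputs
J, W = competitive_step(S, W)
print("winner:", J)
```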