The $i$-th input vector $x_i$ with dimension $d$ is given to all input neurons. The $j$-th input neuron has the parameter $\phi_j$. This parameter is given by the mean vector $m_j$ and the covariance matrix $\Sigma_j$, that is, the matrix whose $kl$-th element is denoted $\sigma_{jkl}$. The $j$-th input neuron outputs

$$\varphi(x_i, \phi_j) = \exp\left( -(x_i - m_j)^{T}\, \Sigma_j^{-1}\, (x_i - m_j) \right)$$

for the input vector, where $\Sigma_j^{-1}$ is given by the inverse matrix of the covariance matrix and the superscript $T$ denotes the transpose of the matrix. The output of such an input neuron is called the radial basis function. The outputs are propagated to the output neurons through the synaptic weights, added up in the output neurons, and the sum
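As a minimal sketch of this forward computation (NumPy; the names `rbf_output`, `m`, and `cov` are ours, not the paper's):

```python
import numpy as np

def rbf_output(x, m, cov):
    """Radial basis function value exp(-(x - m)^T cov^{-1} (x - m))."""
    d = x - m
    return np.exp(-d @ np.linalg.inv(cov) @ d)

# A 2-dimensional input vector given to one input neuron.
x = np.array([0.5, -0.2])
m = np.array([0.0, 0.0])   # mean vector of the j-th input neuron
cov = np.eye(2)            # covariance matrix of the j-th input neuron
print(rbf_output(x, m, cov))  # approaches 1 as x approaches m
```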
is outputted. The function approximation by a neural network is regarded as approximating the nonlinear function by the output of the network. For this purpose, the approximation by the RBFN is realized by decreasing the total squared error function

$$E(w, \phi) = \sum_i E(x_i, w, \phi),$$

where

$$E(x, w, \phi) = \frac{1}{2}\left( t(x) - \sum_j w_j\, \varphi(x, \phi_j) \right)^{2}$$

is the squared error function related with each output neuron and $t(x)$ is the teaching signal. That is, the RBFN must obtain the synaptic weights $w_j$ and the parameters $\phi_j$ of the $j$-th radial basis function by learning.
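A short sketch of the readout and the error, assuming the standard RBFN weighted-sum output and a scalar teaching signal $t_i$ (the right-hand side of the per-sample error is only partially legible in the excerpt, so this form is an assumption):

```python
import numpy as np

def phi(x, m, cov):
    """Radial basis function for input x and parameter (m, cov)."""
    d = x - m
    return np.exp(-d @ np.linalg.inv(cov) @ d)

def network_output(x, ws, ms, covs):
    """RBFN output: the weighted sum of all radial basis functions."""
    return sum(w * phi(x, m, c) for w, m, c in zip(ws, ms, covs))

def total_squared_error(xs, ts, ws, ms, covs):
    """E(w, phi) = sum_i (1/2) * (t_i - output(x_i))^2 over the sample."""
    return sum(0.5 * (t - network_output(x, ws, ms, covs)) ** 2
               for x, t in zip(xs, ts))
```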
We have already proposed CRBFN, which is superior to RBFN. The learning algorithm of CRBFN is as follows:

$$\Delta w_j = -\varepsilon\, \frac{\partial E(w,\phi)}{\partial w_j},$$

$$\Delta m_j = -\varepsilon\, \frac{\partial E(w,\phi)}{\partial m_j},$$

$$\Delta \sigma_j = -\varepsilon\, \delta\, \frac{\partial E(w,\phi)}{\partial \sigma_j},$$

where the coefficient $\delta$ takes 1 or $-1$ depending on the sign of the synaptic weight.
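A minimal sketch of one CRBFN learning step for 1-D inputs and scalar widths $\sigma_j$, using finite-difference gradients for brevity (all names are ours; the paper's exact parameterization of $\Sigma_j$ is not legible here):

```python
import numpy as np

EPS = 0.01  # learning coefficient (epsilon in the text)

def phi(x, m, s):
    """Gaussian radial basis functions with means m and widths s."""
    return np.exp(-((x - m) ** 2) / (s ** 2))

def total_error(xs, ts, w, m, s):
    """Total squared error over the sample (x_i, t_i)."""
    return 0.5 * sum((t - np.dot(w, phi(x, m, s))) ** 2 for x, t in zip(xs, ts))

def num_grad(f, vec, h=1e-6):
    """Central finite-difference gradient of f() with respect to vec."""
    g = np.zeros_like(vec)
    for j in range(len(vec)):
        old = vec[j]
        vec[j] = old + h; e_plus = f()
        vec[j] = old - h; e_minus = f()
        vec[j] = old
        g[j] = (e_plus - e_minus) / (2 * h)
    return g

def crbfn_step(xs, ts, w, m, s):
    """One step: Delta w = -eps dE/dw, Delta m = -eps dE/dm,
    Delta sigma = -eps * delta * dE/dsigma, delta = sign of the weight."""
    f = lambda: total_error(xs, ts, w, m, s)
    gw, gm, gs = num_grad(f, w), num_grad(f, m), num_grad(f, s)
    delta = np.sign(w)  # 1 or -1 per the text (0 only while w_j == 0)
    w -= EPS * gw
    m -= EPS * gm
    s -= EPS * delta * gs

# Usage: three Gaussian units fitted to a toy sample.
xs, ts = np.linspace(-1, 1, 9), np.linspace(-1, 1, 9) ** 2
w, m, s = np.zeros(3), np.array([-1.0, 0.0, 1.0]), np.ones(3)
for _ in range(200):
    crbfn_step(xs, ts, w, m, s)
print(total_error(xs, ts, w, m, s))  # decreases as learning proceeds
```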
We showed that the squared error function is equivalent to the Lyapunov function:

$$\frac{dE(w,\phi)}{dt} = \sum_j \frac{\partial E(w,\phi)}{\partial w_j}\, \frac{dw_j}{dt} \le 0.$$
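The sign follows in one step if the update rules above are read in continuous time, so that $dw_j/dt = -\varepsilon\, \partial E(w,\phi)/\partial w_j$:

$$\frac{dE(w,\phi)}{dt} = \sum_j \frac{\partial E(w,\phi)}{\partial w_j}\, \frac{dw_j}{dt} = -\varepsilon \sum_j \left( \frac{\partial E(w,\phi)}{\partial w_j} \right)^{2} \le 0.$$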
3. REPRODUCTIVE CRITERIA OF NETWORK PARAMETER

We further extend CRBFN for adapting to the change of the network environment. For this purpose, reproduction of the parameter is considered. The update rule of the parameter is given by

$$\Delta m_j = \varepsilon\, \frac{\partial F(m_j)}{\partial m_j}$$

instead of the corresponding equation for CRBFN, where the mean vector depends on each input vector. If the initial value satisfies

$$\Delta m_j^{*} = \Delta m_j,$$

then the update rule for the parameter of RCRBFN is just the same as the update rule for the parameter of CRBFN. It is found that the parameter converges to the fixed point given by

$$\sum_i \varphi(x_i, m_j)\, (x_i - m_j)\, w_j\, s(x_i, m_j) = 0.$$

The teaching signal can be detected as the convergence of the parameter. The update rule of the synaptic weight is given by

$$\Delta w_j = \varepsilon \left( a_j\, \varphi_j - \sum_k \varphi_j\, \varphi_k\, w_k \right) w_j.$$
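A small check of this convergence criterion in the 1-D scalar case; the weighting function $s(\cdot,\cdot)$ is not defined in the excerpt, so an identity placeholder stands in for it (an assumption):

```python
import numpy as np

def gauss(x, m, width=1.0):
    """Gaussian basis value phi(x, m) for scalar inputs."""
    return np.exp(-((x - m) ** 2) / width)

def fixed_point_residual(xs, w_j, m_j, s=lambda x, m: 1.0):
    """Left-hand side of the fixed-point condition
       sum_i phi(x_i, m_j) (x_i - m_j) w_j s(x_i, m_j) = 0;
    a residual near zero signals that m_j has converged."""
    return sum(gauss(x, m_j) * (x - m_j) * w_j * s(x, m_j) for x in xs)

# The parameter has converged (and the teaching signal can be declared
# detected) when the residual is close to zero.
xs = np.array([-1.0, 0.0, 1.0])
print(abs(fixed_point_residual(xs, w_j=0.5, m_j=0.0)) < 1e-9)  # True by symmetry
```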
The algorithm of RCRBFN

STEP 1
The synaptic weight is updated by the first CRBFN update rule, and the parameters are updated by the second and third rules.

STEP 2
If the total squared error function is small enough, then stop the learning; otherwise go to the next step.

STEP 3
For all radial basis functions, the parameter is updated by the reproductive update rule while the coefficient $\varepsilon$ increases gradually from 0.
STEP 4
If the number of values satisfying the fixed-point condition increases by bifurcation, the $j$-th radial basis function is reproduced as the $p$-th radial basis function. The synaptic weight and the parameters of the $p$-th radial basis function take over those of the $j$-th radial basis function, and its mean parameter is given by the point newly added by the bifurcation. Return to STEP 1.
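A sketch of the reproduction in STEP 4 for the 1-D case. How the newly added fixed point is located is not fully legible in the excerpt, so a small offset from $m_j$ stands in for it (an assumption; the paper takes the new mean from the actual bifurcation):

```python
import numpy as np

def reproduce_unit(w, m, s, j, offset=1e-2):
    """STEP 4 sketch: the j-th radial basis function is reproduced as a new
    p-th unit that takes over w_j and s_j; its mean is placed at the point
    newly added by the bifurcation (modelled here as m_j + offset)."""
    w = np.append(w, w[j])           # synaptic weight taken over from unit j
    s = np.append(s, s[j])           # width parameter taken over from unit j
    m = np.append(m, m[j] + offset)  # mean at the newly added fixed point
    return w, m, s

# Usage: grow a 2-unit network into a 3-unit one by reproducing unit 0.
w, m, s = np.ones(2), np.array([0.0, 1.0]), np.ones(2)
w, m, s = reproduce_unit(w, m, s, j=0)
print(m)  # [0.   1.   0.01]
```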