Zhongliang Fu
by an n × n 2D array. Every neurone is connected with its adjacent neurones by weights, as shown in Fig. 3.
[Figure 3. Fig. 3a: 2D locally connected neural network. Fig. 3b: 2D locally connective third-order neural network.]
Suppose the input of neurone (i, j) is $X_{ij}$ and its output is $Y_{ij}$; then:

$$Y_{ij} = f\Big[\sum_{kl,\,mn,\,op \,\in\, R_{ij}} W_{ij,kl,mn,op}\, X_{kl}\, X_{mn}\, X_{op}\Big] \qquad (10)$$
where $R_{ij}$ is an $n_1 \times n_2$ neighborhood of neurone (i, j), usually taken as 5 × 5, and $f[\cdot]$ is the output function of the neurone; here $f[x] = \mathrm{sgn}(x)$.
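As a minimal NumPy sketch of Eq. (10), the third-order response of one neurone can be computed by summing the weighted triple products of inputs over its neighborhood. The function names are illustrative, inputs are assumed to be in {−1, +1}, and sgn(0) is taken as +1 by convention (an assumption not stated in the paper):

```python
import numpy as np

def sgn(x):
    # Output function f[x] = sgn(x); sgn(0) mapped to +1 (assumption).
    return np.where(x >= 0, 1, -1)

def neurone_output(X, W, i, j, r=2):
    """Output Y_ij of one neurone per Eq. (10).

    X : (n, n) input array, one value X_kl per neurone.
    W : (m, m, m) third-order weights over the flattened r-neighbourhood,
        where m = (2r+1)**2 indexes the positions kl, mn, op in R_ij
        (5x5 neighbourhood for r = 2, as the paper suggests).
    """
    patch = X[i - r:i + r + 1, j - r:j + r + 1].ravel()   # X_kl for kl in R_ij
    # Triple sum over kl, mn, op of W[kl, mn, op] * X_kl * X_mn * X_op
    s = np.einsum('a,b,c,abc->', patch, patch, patch, W)
    return sgn(s)
```

With all-ones inputs and weights, the sum is positive and the neurone fires +1; negating the weights flips the sign.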
$W_{ij,kl,mn,op}$ ($kl, mn, op \in R_{ij}$) is the self-adaptive weight of the network. Suppose the weight depends only on the relative distances between the related neurones; then equal-weight classes are constructed as follows:

$$\{\,W_{ij,kl,mn,op} = W_{ij,d_1,d_2,d_3,d_4} \mid d_1 = m - k,\ d_2 = n - l,\ d_3 = o - k,\ d_4 = p - l;\ d_1, d_2, d_3, d_4 > 0\,\}$$
$W_{ij,kl,mn,op}$ may be found with an error-correction learning algorithm, namely:

$$W_{ij,d_1,d_2,d_3,d_4} \leftarrow W_{ij,d_1,d_2,d_3,d_4} + h\,(T_{ij} - Y_{ij})\Big(\sum_{kl \in R_{ij}} X_{kl}\, X_{k+d_1,\,l+d_2}\, X_{k+d_3,\,l+d_4}\Big) \qquad (11)$$
In equation (11), $T_{ij}$ and $Y_{ij}$ are respectively the desired and actual outputs of neurone (i, j), and $h$ is the learning rate.
In equation (11), the weight $W_{ij,d_1,d_2,d_3,d_4}$ has more dimensions than its counterpart in the 1-D connection. But because $R_{ij}$ is a small neighborhood, $d_1, d_2, d_3, d_4$, which change only within $R_{ij}$, remain very small, so neither the size of the network nor the amount of computation increases remarkably.
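The update of Eq. (11) over the shared equal-weight classes can be sketched as follows. This is an assumption-laden illustration: the offsets $d_1\ldots d_4$ are taken to range over 0 to $2r$ within the neighbourhood, and the input array is assumed large enough (or padded) so the shifted indices stay in range; neither convention is fixed by the paper.

```python
import numpy as np

def update_weights(X, W, T, Y, i, j, h=0.1, r=2):
    """One error-correction step of Eq. (11) for neurone (i, j).

    X    : 2D input array, padded so shifted indices stay in range.
    W    : (m, m, m, m) equal-weight classes W_{ij,d1,d2,d3,d4},
           m = 2r + 1; offset range 0..2r is an assumption.
    T, Y : desired and actual outputs T_ij, Y_ij of neurone (i, j).
    h    : learning rate.
    """
    err = h * (T - Y)
    m = 2 * r + 1
    for d1 in range(m):
        for d2 in range(m):
            for d3 in range(m):
                for d4 in range(m):
                    # Correlation term: sum over kl in R_ij of
                    # X_kl * X_{k+d1, l+d2} * X_{k+d3, l+d4}
                    s = 0.0
                    for k in range(i - r, i + r + 1):
                        for l in range(j - r, j + r + 1):
                            s += X[k, l] * X[k + d1, l + d2] * X[k + d3, l + d4]
                    W[d1, d2, d3, d4] += err * s
    return W
```

With all-ones inputs, desired output +1 and actual output −1, every weight class receives the same positive correction $h \cdot 2 \cdot |R_{ij}|$.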
5.2 Classing Network
Substantively, the classing network is a locally connected BP network; it implements a different associative memory. The first layer of the network is the input layer, whose input is the output of the first sub-net. The second layer is a hidden layer, locally connected with the first layer. The third layer is the output layer; the connection between the output layer and the hidden layer is full. Every neurone in the output layer corresponds to a preconcerted class code of a pattern.
If $y_i^m$ is the output value of the $i$th neurone in the $m$th layer, then:
$$y_i^m = f\Big[\sum_j W_{ij}^m\, y_j^{m-1} + q_i^m\Big] \qquad (12)$$
In the above equation, $y_j^{m-1}$ is the output value of the $j$th neurone in the $(m-1)$th layer, $W_{ij}^m$ is the connective weight between $y_i^m$ and $y_j^{m-1}$, $q_i^m$ is the bias, and $f[\cdot]$ is a sigmoid function given by the following equation:
$$f(x) = \frac{1}{1 + e^{-x}} \qquad (13)$$
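Equations (12) and (13) amount to an ordinary fully connected sigmoid layer. A minimal NumPy sketch (variable names are illustrative):

```python
import numpy as np

def sigmoid(x):
    # Eq. (13): f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def layer_output(y_prev, W, q):
    """Eq. (12): y_i^m = f[sum_j W_ij^m * y_j^(m-1) + q_i^m].

    y_prev : (J,) outputs of layer m-1
    W      : (I, J) connective weights W_ij^m
    q      : (I,) biases q_i^m
    """
    return sigmoid(W @ y_prev + q)
```

With zero weights and biases, every output sits at the sigmoid midpoint 0.5, which is the usual sanity check for this activation.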
International Archives of Photogrammetry and Remote Sensing. Vol. XXXIII, Part B3. Amsterdam 2000. 309