The weights W_{ij}^{m} may be computed with the \delta-rule learning algorithm:

W_{ij}^{m} = W_{ij}^{m} + \eta \, \delta_i^{m} \, y_j^{m-1}        (14)

In this equation \eta is the learning rate, commonly 0.01 < \eta < 0.3, and \delta_i^{m} is the error term of the ith neuron in layer m.
For the output layer,

\delta_i^{m} = y_i^{m} (1 - y_i^{m}) (T_i - y_i^{m})        (15)

where T_i and y_i^{m} are respectively the desired and actual output of the ith neuron.
For a hidden layer,

\delta_i^{m} = y_i^{m} (1 - y_i^{m}) \sum_j \delta_j^{m+1} W_{ji}^{m+1}        (16)
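As a minimal sketch (not the paper's own implementation), equations (14)-(16) could be applied to a fully connected layer with sigmoid activations as follows; the array names W, W_next, y_prev, y_out, y_hid, delta_next and the parameter eta are hypothetical:

```python
import numpy as np

def output_delta(y_out, T):
    # Equation (15): delta_i = y_i (1 - y_i) (T_i - y_i) for the output layer
    return y_out * (1.0 - y_out) * (T - y_out)

def hidden_delta(y_hid, delta_next, W_next):
    # Equation (16): delta_i = y_i (1 - y_i) * sum_j delta_j W_ji for a hidden layer,
    # where W_next maps this layer to the next one (shape: next_size x hidden_size)
    return y_hid * (1.0 - y_hid) * (W_next.T @ delta_next)

def delta_rule_update(W, delta, y_prev, eta=0.1):
    # Equation (14): W_ij <- W_ij + eta * delta_i * y_j^(m-1)
    return W + eta * np.outer(delta, y_prev)
```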
To keep the correction values from oscillating, a momentum term is added to each correction value:

\Delta W_{ij}^{m}(n+1) = \eta \, \delta_i^{m} \, y_j^{m-1} + \alpha \, \Delta W_{ij}^{m}(n)        (17)

In equation (17), n is the iteration number and \alpha is a positive momentum coefficient, \alpha = 0.9.
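A sketch of the momentum-augmented correction in equation (17), reusing the hypothetical names from the previous sketch; dW_prev holds the previous correction \Delta W(n):

```python
import numpy as np

def momentum_update(W, delta, y_prev, dW_prev, eta=0.1, alpha=0.9):
    # Equation (17): dW(n+1) = eta * delta_i * y_j + alpha * dW(n)
    dW = eta * np.outer(delta, y_prev) + alpha * dW_prev
    # The correction is returned as well, so it can serve as dW(n) in the next iteration.
    return W + dW, dW
```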
Equations (15) and (16) show that when y_i^{m} = 1 or 0, \delta_i^{m} = 0 even if T_i \neq y_i^{m}, which makes \Delta W_{ij} equal to 0. To avoid this case, when y_i = 0 or 1, y_i is set to 0.1 or 0.9 respectively.
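One way to implement this clamping (a sketch only; the 0.1/0.9 values come from the text, while the function name and the choice to clamp a generic value vector are assumptions):

```python
import numpy as np

def clamp_unit_values(v):
    # Replace saturated values 0 and 1 by 0.1 and 0.9 so that the factor
    # y (1 - y) in equations (15)/(16) never forces the delta to zero.
    v = np.where(v <= 0.0, 0.1, v)
    v = np.where(v >= 1.0, 0.9, v)
    return v
```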
To improve the identification reliability, rejection of identification is introduced. The rules are as follows (a code sketch follows the list):
a. All of the network output values are less than V_1 = 0.75;
b. The second-largest output value is greater than a threshold V_2 = 0.4;
c. The difference between the maximum and the second-largest output value is less than a threshold V_3 = 0.35.
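A sketch of the rejection test, assuming that identification is rejected when any one of rules a-c holds (the text does not state the combination explicitly) and that outputs is the vector of network output values:

```python
import numpy as np

def reject(outputs, V1=0.75, V2=0.4, V3=0.35):
    """Return True if the identification result should be rejected."""
    s = np.sort(outputs)[::-1]          # output values in descending order
    max_out, second_out = s[0], s[1]
    if max_out < V1:                    # rule a: every output is below V1
        return True
    if second_out > V2:                 # rule b: second-largest output exceeds V2
        return True
    if max_out - second_out < V3:       # rule c: maximum and second-largest too close
        return True
    return False
```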
The output of the first sub-net in a compound network may differ slightly from the ideal pattern, but this does not affect correct identification of the pattern, because the second sub-net can also tolerate some error.
6. EXPERIMENT AND CONCLUSION
To verify the validity and reliability of all of the above algorithms, 28 images were acquired and processed. Fig. 4 shows an original image and its processing results.
Fig. 4a Original image        Fig. 4b Binary image
Fig. 4c Horizontal projection        Fig. 4d Locating the character group