$a^{2} = \mathrm{compet}(n^{2}), \qquad a_{m}^{2} = \begin{cases} 1, & m = m^{*} \\ 0, & m \neq m^{*} \end{cases}$    (9)

where $m^{*}$ is the index (serial number) of the maximal value in the net input vector $n^{2}$.
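A minimal sketch of this winner-take-all rule, written here in plain MATLAB (not the author's implementation; the sample net input n2 is an assumed toy value), sets the output element at the index of the maximal net input to 1 and all others to 0:

    % Minimal sketch of the winner-take-all rule in Eq. (9).
    % n2 stands for the net input of the competitive layer (assumed toy values).
    n2 = [0.3; 1.7; 0.9; 0.2];
    [~, mStar] = max(n2);          % m*: index of the maximal value in n2
    a2 = zeros(size(n2));
    a2(mStar) = 1;                 % a2(m) = 1 if m = m*, 0 otherwise
    disp(a2.')                     % prints 0 1 0 0: the sign is assigned to class m*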
3. EXPERIMENTS AND CONCLUSIONS
To verify the validity of the author's method, it is implemented in Visual C#.NET and MATLAB. Many experiments are carried out on natural scene images taken in Nanjing. These images, 1392×1040 pixels in size, are taken by the vehicle-borne mobile photogrammetry system at different times and under different lighting conditions. A natural scene image taken by its CCD camera is shown in Figure 4.
Figure 4. The experimental data: a natural scene image
The traffic signs detected from the image in Figure 4 are shown in Figure 5(a), the corresponding binary inner images of these traffic signs are shown in Figure 5(b), and the recognition results are shown in Figure 5(c).
(a) Detected traffic signs
(b) Binary inner images
(c) Recognition results: Cross road; Caution, pedestrian crossing; Keep right
Figure 5. Traffic sign recognition results for the image in Figure 4
The experimental results show that the author's traffic sign recognition method achieves good performance. The run time of the method is about 0.4 s with serial execution; if a dedicated image processing unit and parallel processing techniques were used, the recognition speed would be even higher.
Besides the above experimental results, a total of 221 natural scene images taken by the vehicle-borne system at different times and locations are selected to test the proposed method. These images contain a total of 500 traffic signs of different kinds, of which 480 are detected. For comparison with other recognition methods based on invariant moments, three kinds of invariant moments are selected: Hu moments (Hu, 1962), Tchebichef moments (Li et al., 2006), and Zernike moments (Fleyeh et al., 2007). The comparative statistical results are shown in Table 2, where H.M, T.M and Z.M denote the Hu moment vector, the Tchebichef moment vector and the Zernike moment vector, respectively.
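As a point of reference for the invariant-moment baselines, the sketch below computes the first two Hu moments (Hu, 1962) of a binary inner image in plain MATLAB; the toy image B and all variable names are our own assumptions, not code from the paper.

    % Sketch: first two Hu moments (Hu, 1962) of a binary inner image B.
    % B is an assumed toy binary image, not data from the paper.
    B = false(64);  B(20:45, 16:50) = true;                    % toy rectangular region
    [r, c] = find(B);                                          % pixel coordinates of the shape
    m00  = numel(r);                                           % zeroth-order moment (area)
    xbar = mean(c);  ybar = mean(r);                           % centroid
    mu   = @(p, q) sum(((c - xbar).^p) .* ((r - ybar).^q));    % central moment mu_pq
    eta  = @(p, q) mu(p, q) / m00^(1 + (p + q)/2);             % normalised central moment
    phi1 = eta(2,0) + eta(0,2)                                 % first Hu invariant
    phi2 = (eta(2,0) - eta(0,2))^2 + 4*eta(1,1)^2              % second Hu invariant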
""^-feature vector
recogniton residí
Author’s
vector
H.M
T.M
Z.M
yellow
warning
signs(105)
recognized
rate %
105
100
25
23.8
54
51.4
31
29.5
red
prohibition
signs(221)
recognized
rate %
217
98.2
65
29.4
98
44.3
68
30.8
blue
mandatory
signs(154)
recognized
rate %
154
100
69
44.8
103
66.9
86
55.8
Table 2. Comparative statistical results between the author's central projection vector and the invariant moment vectors
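The rates in Table 2 follow directly from the counts; as a small worked check, the MATLAB lines below recompute the per-class and overall rates for the central projection vector column (only the counts are taken from the table; the variable names are ours).

    % Recomputing the recognition rates of Table 2 for the central projection
    % vector column (only the counts are taken from the table).
    detected   = [105 221 154];            % yellow warning, red prohibition, blue mandatory
    recognized = [105 217 154];            % recognised with the central projection vector
    rate    = 100 * recognized ./ detected;           % per-class rates in percent
    overall = 100 * sum(recognized) / sum(detected);  % overall rate
    fprintf('per-class: %.1f  %.1f  %.1f %%   overall: %.1f %%\n', rate, overall)
    % prints: per-class: 100.0  98.2  100.0 %   overall: 99.2 %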
Some comparative experimental results between the author's recognition method based on the central projection vector and the methods based on invariant moments are shown in Figure 6. From left to right, Figure 6(a) shows the natural scene images, the detected traffic signs and their binary inner images; Figure 6(b) shows the recognition results based on Hu moments; Figure 6(c) shows the recognition results based on Tchebichef moments; Figure 6(d) shows the recognition results based on Zernike moments; and Figure 6(e) shows the recognition results based on the author's central projection vector.
(a) (b) (c) (d) (e)
Figure 6. Comparative experimental results between the author's recognition method based on the central projection vector and the methods based on invariant moments
From the experimental results, we can see that the recognition rate of the proposed traffic sign recognition method is over 98%. This recognition rate is higher than those of the methods based on invariant moments, which shows that the central projection transformation achieves better feature representation of traffic signs than invariant moments. The shape feature