Figure 4. Vehicle close-up from LiDAR with parameters.
In subsequent tests, additional parameters were used, such as average intensity values of the four equal segments, or derived parameters, such as vehicle footprint size or vehicle volume. Since all these parameters were mostly generic, no physical modeling was used. Principal Component Analysis (PCA) was selected as an obvious choice to identify significant correlation among the parameters describing the data and, ultimately, to select the minimum and sufficient subset of parameters. There were two training sets used for the PCA, one from Dayton, OH containing 72 vehicles and one from Toronto with 50 vehicles. Table 1 summarizes the PCA results for various parameter selections. Additional details can be found in (Toth and Brzezinska, 2004).

Parameters                              #    Maximum   2nd component
H1, H2, H3, H4                          4    74.87     16.15
W, L, H1, H2, H3, H4                    6    96.58      2.12
W, L, H1, H2, H3, H4, I1, I2, I3, I4   10    65.48     21.94
W, L, A (W*L), V (W*L*H)                4    99.43      0.49

Table 1. PCA performance for various parameter sets.

The vehicles of the two training sets were grouped into three categories: cars, MUVs (Multipurpose Utility Vehicles) and trucks. Using the two most significant eigenvalues as a classification space, the training set can be visualized as shown in Figure 5.

Figure 5. Vehicle distribution in the classification space defined by the 6 (a) and 4 (b) parameter spaces.
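For illustration only, the following Python sketch mirrors this screening step: it computes the component shares reported in Table 1 and the two-component classification space visualized in Figure 5. The feature matrix, class labels, and array sizes are placeholder assumptions, not the actual training data.

# Minimal sketch of the PCA screening step, assuming a per-vehicle feature
# matrix (e.g., W, L, H1..H4); the data below are synthetic placeholders,
# not the Dayton or Toronto training sets.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(72, 6))                      # 72 vehicles x 6 parameters (hypothetical)
labels = rng.choice(["car", "MUV", "truck"], 72)  # hypothetical class labels

pca = PCA()
scores = pca.fit_transform(X)                     # PCA-transformed features
share = 100.0 * pca.explained_variance_ratio_     # eigenvalue shares, cf. Table 1
print(f"1st component: {share[0]:.2f}%, 2nd component: {share[1]:.2f}%")

# The two most significant components span the classification space of Figure 5.
classification_space = scores[:, :2]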
The classification performance was evaluated by using three widely used techniques (Toth et al., 2003a). The first method, a rule-based classifier, contains decision rules derived from the PCA-transformed features. As depicted in Figure 5, a clear separation, in other words, a clustering of samples with identical labels can be easily observed, and the groups can be separated by straight lines. The second method was a fundamental statistical technique: the minimum distance method. This classifier is based on a class description involving the class centers, which are calculated by averaging the feature components of each class.
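The two classifiers above can be summarized in a short sketch: a rule-based decision expressed as straight-line thresholds in the two-component space, and a minimum distance classifier built from class centers. The thresholds and function names are illustrative assumptions, not the rules derived in the study.

# Sketch of the two classifiers described above; thresholds and data layout
# are illustrative assumptions, not the decision rules used in the study.
import numpy as np

def rule_based(pc1, pc2):
    # Straight-line (threshold) decision rules in the two-component space.
    if pc1 > 2.0:            # hypothetical boundary separating trucks
        return "truck"
    if pc1 > 0.5:            # hypothetical boundary separating MUVs
        return "MUV"
    return "car"

def fit_class_centers(scores, labels):
    # Class centers: the mean of the feature components of each class.
    return {c: scores[labels == c].mean(axis=0) for c in np.unique(labels)}

def minimum_distance(sample, centers):
    # Assign the sample to the class whose center is nearest.
    return min(centers, key=lambda c: np.linalg.norm(sample - centers[c]))

With the PCA scores from the earlier sketch, fit_class_centers(classification_space, labels) followed by minimum_distance(classification_space[0], centers) would, for example, label a single vehicle.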
Finally, the third method in the vehicle recognition investigation was based on an artificial neural network classifier. A 3-layer feed-forward (back-propagation) neural network structure was implemented in our tests. The training method was the Levenberg-Marquardt algorithm (Demuth, 1998), the maximal number of training steps (epochs) was 70, and the required error goal value was 0.1. The network error was calculated by the mean square error (MSE) method.
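As an illustration of the network setup, the sketch below builds a comparable 3-layer feed-forward classifier on one-hot targets and checks the MSE after at most 70 epochs. It substitutes scikit-learn's gradient-based training for the Levenberg-Marquardt algorithm used in the study, and its layer size, target encoding, and data are assumptions.

# Sketch of a comparable 3-layer feed-forward network (input, one hidden,
# and output layer); placeholder data stand in for the training sets.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(72, 2))                      # placeholder classification-space features
labels = rng.choice(["car", "MUV", "truck"], 72)
classes = ("car", "MUV", "truck")
T = np.array([[1.0 if l == c else 0.0 for c in classes] for l in labels])

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=70, random_state=0)
net.fit(X, T)
mse = np.mean((net.predict(X) - T) ** 2)          # network error by the MSE method
print(f"in-sample MSE: {mse:.3f} (error goal in the study: 0.1)")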
The three studied vehicle classification techniques were tested on the first training data set from Ohio (1), on the data set containing vehicles from Ohio and Michigan (2), and on a combined dataset, including the Ontario data (3), provided by Optech. The first test (in-sample test) was only an internal check of the algorithms. Table 2 shows a performance comparison of the three techniques. Additional results can be found in (Toth et al., 2003a).
Data set (total number of vehicles)   Rule-based   Minimum distance   Neural network
Ohio (72)                             0 (0%)       8 (11%)            12 (16%)
Ohio + Michigan (87)                  2 (2.3%)     12 (13.8%)         8 (9.2%)
Ohio + Michigan + Ontario (102)       2 (2%)       17 (16.7%)         16 (15.7%)

Table 2. The comparison of the three classification techniques: vehicle count of misclassification errors.
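The entries of Table 2 are plain misclassification counts with their percentages; a minimal sketch of that tally, with placeholder label arrays, follows.

# Sketch: misclassification count and rate, as tabulated in Table 2; the
# label arrays are placeholders standing in for any of the three classifiers.
import numpy as np

def misclassification(true_labels, predicted_labels):
    true_labels = np.asarray(true_labels)
    predicted_labels = np.asarray(predicted_labels)
    errors = int(np.sum(true_labels != predicted_labels))
    return errors, 100.0 * errors / true_labels.size

# Example: 2 errors among 87 vehicles is reported as "2 (2.3%)".
count, rate = misclassification(["car"] * 85 + ["MUV", "truck"],
                                ["car"] * 85 + ["car", "car"])
print(f"{count} ({rate:.1f}%)")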
5.2 Vehicle Extraction and Tracking from Helicopter Imagery
In cooperation with the University of Arizona (UoA), an experimental sensor configuration based on a 4K by 4K digital camera with a 50 mm focal distance, 15-µm pixel size, and 60 × 60 mm² imaging area, a video, and a small-resolution digital frame camera assembly was flown to acquire images
over a busy intersection north of the UoA campus area (see
details in Grejner-Brzezinska and Toth, 2003; Grejner-