produced reference images applied in the accuracy assessment procedure.
Figure 6. Pan-sharpened QuickBird image of Bushehr harbor and its manually produced reference image
Figure 7. Pan-sharpened IKONOS image of Kish Island and its
manually produced reference image
In the following sections, the practical results of the different steps of road extraction are presented, with emphasis on the practical aspects of the implementation.
3.1 Implementation Results of Road Detection
Road detection was performed using an artificial neural network consisting of 7 neurons in its input layer, in charge of receiving 3 spectral values (R, G, B) and 4 textural parameters as explained in section 2.1. The hidden layer was made up of 10 neurons, and the output layer, having only one neuron, was designed to show the response of the neural network.
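Purely as an illustration of this 7-10-1 structure, a minimal NumPy sketch of the forward pass is given below; the sigmoid activations, the random initialization and all variable names are assumptions made for the example, not details taken from the paper.

import numpy as np

# Hypothetical 7-10-1 network: 7 inputs (R, G, B + 4 textural parameters),
# 10 hidden neurons and a single output neuron giving the road response.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(7, 10))   # input -> hidden weights
b1 = np.zeros(10)
W2 = rng.normal(scale=0.1, size=(10, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(features):
    """features: (n_pixels, 7) array of [R, G, B, t1, t2, t3, t4] per pixel."""
    hidden = sigmoid(features @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)       # close to 1 -> road, close to 0 -> background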
                         |        Sample #1         |        Sample #2
                         | RCC (%)  BCC (%)  RMSE   | RCC (%)  BCC (%)  RMSE
No Texture Parameters    |  82.36    93.53   0.172  |  77.05    90.86   0.259
Using Texture Parameters |  93.54    96.31   0.106  |  80.77    96.15   0.196
Table 1. Accuracy assessment of road detection procedure
About 500 road and 500 background pixels were selected from each input image to be used in the neural network training stage. An adaptive strategy was applied to the learning rate and momentum parameters to stabilize the training of the neural network.
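One possible realization of such a training setup is sketched below using scikit-learn's MLPClassifier, whose adaptive learning-rate schedule and momentum term play a similar stabilizing role; the library choice, the placeholder training arrays and the parameter values (e.g. momentum = 0.9) are assumptions, since the paper does not specify the exact adaptation rule used.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder training set standing in for the ~500 road and ~500 background
# pixels per image, each described by 7 features (R, G, B + 4 textural values).
X_train = np.random.rand(1000, 7)
y_train = np.r_[np.ones(500), np.zeros(500)]   # 1 = road, 0 = background

net = MLPClassifier(
    hidden_layer_sizes=(10,),    # single hidden layer of 10 neurons
    activation="logistic",
    solver="sgd",
    learning_rate="adaptive",    # learning rate is reduced when training stalls
    momentum=0.9,                # momentum term to stabilize the updates
    max_iter=2000,
)
net.fit(X_train, y_train)
road_response = net.predict_proba(X_train)[:, 1]   # network response in [0, 1]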
In order to evaluate the performance of the road detection procedure, three quality control parameters, RCC, BCC and RMSE, were used. RCC and BCC, which stand for “Road Detection Correctness Coefficient” and “Background Detection Correctness Coefficient” respectively, are the averages of the correct neural network responses for road and background detection, obtained by comparison with the manually produced reference images (Figures 6b and 7b). Regarding the difference between the neural network response and its true expected response (0 for background and 1 for road pixels) as the error value, the Root Mean Square Error (RMSE) can be computed as the third accuracy assessment parameter.
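Based on this description, the three measures could be computed as in the sketch below; the interpretation of BCC as the average of (1 - response) over background pixels and the scaling of RCC and BCC to percent are assumptions inferred from the wording above and from the value ranges in Table 1.

import numpy as np

def assess(response, reference):
    """response: network output in [0, 1]; reference: 1 = road, 0 = background."""
    road = reference == 1
    background = reference == 0
    rcc = 100.0 * response[road].mean()                # road detection correctness (%)
    bcc = 100.0 * (1.0 - response[background]).mean()  # background detection correctness (%)
    rmse = np.sqrt(np.mean((response - reference) ** 2))
    return rcc, bcc, rmse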
Figure 8 shows the neural network road detection results for the input pan-sharpened images of Figures 6a and 7a. These grey-scale images were produced by multiplying the normalized neural network output by 255.
Figure 8. Neural network road detection results
In Figure 8, the left-side images (8a and 8c) show the results obtained with the simple neural network where no texture parameter is used. The right-side images of Figure 8 (8b and 8d) depict the output of the proposed neural network structure, where texture parameters of the preliminary road raster maps (Figures 8a and 8c) are used beside the spectral information to generate the neural network input parameter set.
Table 1 shows the obtained accuracy assessment parameters for both cases, where the input source images of Figures 6a and 7a are called Sample #1 and Sample #2. The accuracy assessment parameters presented in Table 1 show that both the road detection and background detection abilities of the texturally improved neural network are increased, and thus the efficiency of the road detection methodology proposed in this research is confirmed.
3.2 Implementation Results of Road Vectorization
The results obtained with the improved neural networks (Figures 8b and 8d) were converted to road raster maps by putting a threshold on the grey-scale values. The obtained road raster maps were used in the road vectorization process described in section 2.2.
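A minimal sketch of this conversion is given below; the threshold value (128, i.e. a network response of 0.5) and the function name are assumptions, as the paper does not report the value actually used.

import numpy as np

def to_road_raster(grey_image, threshold=128):
    # grey_image: 8-bit network response image (0-255) as in Figure 8.
    # Pixels at or above the (assumed) threshold are labelled road (1),
    # the rest background (0).
    return (grey_image >= threshold).astype(np.uint8)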
At the first attempt, genetically guided road key point determination was performed on a simulated road raster map. Although the obtained result was acceptable, the computation time, even for the small simulated road raster map, was