3.1. Aerial image segmentation
In our experiment, 12 color aerial images are used. The photography scale of the color aerial images is 1:8000, the principal focal length is 152.987 mm, the scanning resolution is 96 µm, and the photo size is 23 cm × 23 cm. An example of aerial image segmentation is shown in Fig. 2. The piece of color aerial image shown in Fig. 2(A) contains dense trees, sparse trees, houses, roads, grass and ground. Fig. 2(B) shows the corresponding disparity image, a grey image with 256 levels. It can be seen that dense trees and some houses appear white and light grey, while the low sparse trees, grass and ground appear dark grey and black. This is because higher objects such as dense trees and some houses have larger grey values than grass and ground. According to the disparity image, high and low objects are recognized, and the result is shown in Fig. 2(C), where white and black areas represent high and low objects respectively. As can be seen from Fig. 2(C), the high region contains mainly dense trees, high sparse trees and a few houses, while non-tree objects such as some lower houses, grass, roads, ground and lower sparse trees are classified into the low region.
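The paper does not state the exact decision rule that turns the disparity image into high and low regions. The following is a minimal sketch, assuming the 256-level disparity image is available as a grey image file and that a single global threshold, here chosen by Otsu's method, separates high objects from low ones; both the threshold rule and the file name disparity.png are illustrative assumptions, not necessarily the authors' procedure.

import numpy as np
from skimage import io, filters

# 8-bit grey disparity image: brighter pixels correspond to higher objects.
disparity = io.imread("disparity.png")

# Global threshold (Otsu) splitting the 256 grey levels into two classes.
threshold = filters.threshold_otsu(disparity)

high_mask = disparity > threshold   # dense trees, high sparse trees, some houses
low_mask = ~high_mask               # grass, ground, roads, lower houses and trees

print(f"threshold={threshold}, high pixels={high_mask.sum()}, low pixels={low_mask.sum()}")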
Finally, on the basis of these preliminary results, the trees are refined by the Fuzzy C-Means algorithm, using the texture and color features listed in Table 1, within the high and low areas separately. Fig. 2(D) shows the final segmented result, in which trees and non-trees are expressed as white and black areas. Compared with the preliminary result shown in Fig. 2(C), some lower sparse trees that belonged to the non-trees in the preliminary segmentation are now classified into the tree region. In particular, the number of pixels misclassified as non-trees because of the height differences between trees is small, and a large number of pixels are correctly classified. In this paper, a scheme for grouping the observed points based on the above segmentation results is then proposed in order to improve the quality of aerial triangulation in forest-covered regions.
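The paper refines the tree/non-tree labelling inside each height region with Fuzzy C-Means on texture and color features, but gives no implementation details. Below is a minimal sketch of a two-cluster Fuzzy C-Means in NumPy; the feature vector used here (per-pixel RGB plus a local grey-level standard deviation as a crude texture measure) and the file names aerial.png and disparity.png are illustrative assumptions, not the authors' exact feature set.

import numpy as np
from scipy.ndimage import uniform_filter
from skimage import io, filters

def local_std(gray, size=5):
    # Local grey-level standard deviation: a simple texture measure (assumption).
    mean = uniform_filter(gray, size)
    mean_sq = uniform_filter(gray * gray, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    # Standard Fuzzy C-Means on a (n_samples, n_features) matrix X.
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                     # fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # weighted cluster centers
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = dist ** (-2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)         # membership update
        if np.abs(U_new - U).max() < tol:
            return U_new, centers
        U = U_new
    return U, centers

# Placeholder file names; the high/low masks are built as in the previous sketch.
rgb = io.imread("aerial.png").astype(float) / 255.0       # H x W x 3 color image
disparity = io.imread("disparity.png")
high_mask = disparity > filters.threshold_otsu(disparity)
low_mask = ~high_mask

gray = rgb.mean(axis=2)
features = np.dstack([rgb, local_std(gray)[..., None]])   # color + texture, H x W x 4

labels = np.zeros(gray.shape, dtype=np.uint8)
for mask in (high_mask, low_mask):      # refine trees separately in each height region
    U, _ = fuzzy_c_means(features[mask])
    labels[mask] = U.argmax(axis=1)     # 0/1 per pixel; the "tree" cluster still has to
                                        # be identified, e.g. from the greener center

One plausible reason for clustering the two height regions separately, as the paper does, is that it keeps the grey-value differences caused by object height from dominating the color and texture clusters.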
3.2. Automatic aerial triangulation with grouped observed points
The main task of this subsection is to compare automatic aerial triangulation in which the observed points on trees and on non-trees are divided into different groups with automatic aerial triangulation using a single united group of points. Generally, the matching accuracy and reliability of the observed points on trees are lower than those of the observed points on non-trees. If the observed points on trees and on non-trees are grouped before the adjustment, the accuracy of automatic aerial triangulation in forest-covered regions can be improved. To divide the observed points automatically, the above aerial image segmentation results are used. Fig. 3 shows the procedure of grouping the observed points. Fig. 3(A) and Fig. 3(B) show an original color aerial image and its segmentation result, respectively. The result of grouping the observed points is shown in Fig. 3(C), where red and blue crosses mark the observed points on trees and on non-trees, respectively. In this paper, the comparison is carried out on the basis of two blocks of aerial images. The blocks are described in Table 1.
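The paper does not give the data structures used to assign each observed point to a group. The sketch below assumes that the tie points are available as (column, row) pixel coordinates in one image and that the binary tree mask from section 3.1 has the same geometry as that image, so a point goes into the tree group when the mask around it is mostly tree pixels. The small voting neighborhood, the function name group_observed_points and the placeholder variables points_xy and tree_mask are illustrative assumptions.

import numpy as np

def group_observed_points(points_xy, tree_mask, win=2):
    # points_xy : (N, 2) array of (col, row) image coordinates of observed points.
    # tree_mask : boolean 2-D array, True where the segmentation marks trees.
    # win       : half-size of a voting neighborhood (assumption, not from the paper),
    #             so a single noisy mask pixel does not decide the group.
    h, w = tree_mask.shape
    on_tree = np.zeros(len(points_xy), dtype=bool)
    for i, (x, y) in enumerate(np.rint(points_xy).astype(int)):
        r0, r1 = max(y - win, 0), min(y + win + 1, h)
        c0, c1 = max(x - win, 0), min(x + win + 1, w)
        on_tree[i] = tree_mask[r0:r1, c0:c1].mean() > 0.5
    return points_xy[on_tree], points_xy[~on_tree]

# Placeholder usage:
# tree_points, non_tree_points = group_observed_points(points_xy, tree_mask)

In the block adjustment the two groups can then be treated differently, for example by giving the tree points a larger a priori standard deviation; this weighting scheme is one possible use of the grouping, not necessarily the authors' exact procedure.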
Figure 3. The procedure of grouping the observed points: (A) original color image; (B) segmentation result; (C) grouped observed points.
The results of the two block adjustments under the two different conditions are listed in Table 2.
Table 2 shows the accuracy values of the two block adjustments in the two different cases: observed points grouped and ungrouped. The first column corresponds to the nine common accuracy measures of the aerial adjustment,
Table 1. Two aerial block data

Description   Photo scale   Camera   No. of photos   Focal length (mm)   Control points   Check points   Pixel size (µm)
Block No.1    1:8000        RC30     12              152.987             16               10              96
Block No.2    1:40000       RC10     48              87.966              124              70              25
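As a small illustration of how the grouped and ungrouped runs could be compared at the check points listed in Table 1, the snippet below computes the root-mean-square error per coordinate axis. This is only a sketch of one of the usual accuracy measures, not the authors' evaluation code, and the array names xyz_grouped, xyz_ungrouped and xyz_reference are placeholders.

import numpy as np

def check_point_rmse(adjusted_xyz, reference_xyz):
    # RMSE at the check points, separately for X, Y and Z; inputs are (N, 3) arrays.
    diff = np.asarray(adjusted_xyz) - np.asarray(reference_xyz)
    return np.sqrt((diff ** 2).mean(axis=0))

# Placeholder usage: compare the two adjustment variants on the same check points.
# rmse_grouped = check_point_rmse(xyz_grouped, xyz_reference)
# rmse_ungrouped = check_point_rmse(xyz_ungrouped, xyz_reference)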