Object       Completeness (%)   Correctness (%)   Quality (%)
Buildings           98                 72              60
Trees               99                 83              80
Roads               78                 72              62

Table 1: Accuracy Assessment
Figure 10: Extracted Objects & Orthophoto 
Figure 11: 3D Model (County Sligo) 
5. DISCUSSION 
The classification results presented in Table 1 were produced by automated processes and depend on the choice of appropriate parameters and thresholds.
Building extraction is the first step in the classification process and is therefore important for the extraction of further objects. A completeness value of 98% implies that the adopted strategy has been successful in identifying these objects. However, the correctness and quality values are significantly lower than the completeness because of false positive (FP) detections. FP detections among buildings arise from large trucks or industrial installations incorrectly identified as buildings, whereas false negatives (FN, i.e. missed buildings) mostly occurred for small buildings of less than 50 sq. m.
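For reference, completeness, correctness and quality are commonly defined from true positive (TP), FP and FN counts; the exact evaluation procedure is not restated here, so the following is only a minimal sketch of these standard definitions, with purely illustrative counts:

    def accuracy_metrics(tp, fp, fn):
        """Completeness, correctness and quality in percent.

        completeness = TP / (TP + FN)  -- share of reference objects that were found
        correctness  = TP / (TP + FP)  -- share of extracted objects that are correct
        quality      = TP / (TP + FP + FN)
        """
        completeness = 100.0 * tp / (tp + fn)
        correctness = 100.0 * tp / (tp + fp)
        quality = 100.0 * tp / (tp + fp + fn)
        return completeness, correctness, quality

    # Illustrative counts only (not the evaluation data of this study):
    # high completeness with lower correctness indicates many FP detections.
    print(accuracy_metrics(tp=45, fp=15, fn=5))  # -> (90.0, 75.0, ~69.2)

The formulas make the behaviour described above explicit: FP detections lower correctness and quality but leave completeness untouched, while FN detections lower completeness.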
In a subsequent manual step, these small buildings were individually identified and included in the final building layer. Identifying all buildings in the area in this way allowed the extracted vegetation data to be improved, which later helped in the extraction of roads. Commercial or residential buildings with glass roofs or green colouring were missed in the NDVI layer but were present in the NDSM and were added to the building layer manually. Many small sheds, which are not part of the buildings, were identified in the backyards of houses; this significantly reduced the correctness value.
Vegetation was extracted by subtracting the building layer 
from the NDSM. Very small buildings which appeared in the 
vegetation layer were identified and manually added to the 
building layer. Continuing research is targeted at reducing the 
dependence on such manual steps. 
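The subtraction described above amounts to masking the building pixels out of the above-ground NDSM objects. A minimal raster sketch, assuming co-registered arrays of identical shape; the array names and the height threshold are illustrative assumptions, not values from this study:

    import numpy as np

    # Assumed inputs (placeholders):
    #   ndsm          - normalised DSM, height above ground in metres
    #   building_mask - boolean raster of the (manually refined) building layer
    def vegetation_mask(ndsm, building_mask, height_min=2.0):
        """Above-ground NDSM objects that are not buildings.

        height_min is an illustrative threshold to ignore low objects such as
        cars; small missed buildings remaining in this mask were, as described
        above, moved to the building layer manually.
        """
        above_ground = ndsm > height_min
        return above_ground & ~building_mask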
Multiple reflections, size and compactness were used to 
separate single trees from groves. However, the LiDAR sensor 
can efficiently differentiate between multiple reflections only 
where their height differences are significant. 
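Size and compactness are simple per-segment shape measures. The following is only a minimal sketch of such a size/compactness test, assuming a labelled vegetation raster; the thresholds and the crude perimeter estimate are illustrative assumptions, not the implementation used here:

    import numpy as np
    from scipy import ndimage

    def split_single_trees(veg_mask, pixel_size=1.0, max_area=80.0, min_compactness=0.6):
        """Flag small, compact vegetation segments as candidate single trees;
        larger or more irregular segments are treated as groves.
        Compactness is 4*pi*A / P^2 (1.0 for a perfect circle).
        Area and compactness thresholds are illustrative assumptions."""
        labels, n = ndimage.label(veg_mask)
        single_trees = np.zeros_like(veg_mask, dtype=bool)
        for i in range(1, n + 1):
            segment = labels == i
            area = segment.sum() * pixel_size ** 2
            # crude perimeter estimate: segment pixels exposed by erosion
            eroded = ndimage.binary_erosion(segment)
            perimeter = (segment & ~eroded).sum() * pixel_size
            compactness = 4.0 * np.pi * area / perimeter ** 2 if perimeter else 0.0
            if area <= max_area and compactness >= min_compactness:
                single_trees[segment] = True
        return single_trees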
Roads appear to be the most difficult objects to extract. They are part of the DTM, and their spectral reflectance varies considerably within a single image. Setting an NDVI threshold helps to separate vegetated from non-vegetated areas, but reflections from barren land or walking trails in the fields also have very low NDVI values. Roads which are not covered by building shadows or trees are detected successfully. Road markings of different colours also affect the extraction process, and access roads connecting houses to the road network are of different materials and need to be classified separately.
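The NDVI test is a simple per-pixel computation, NDVI = (NIR - Red) / (NIR + Red). A minimal sketch of a road-candidate mask, assuming co-registered red and NIR bands plus the NDSM; the threshold values are illustrative, and the caveats above (barren land, shadows, mixed materials) still apply:

    import numpy as np

    def road_candidates(red, nir, ndsm, ndvi_max=0.1, height_max=0.5):
        """Per-pixel road candidates: low NDVI (non-vegetated) and near ground level.
        Thresholds are illustrative assumptions; barren land and trails also pass
        this test and require further filtering."""
        red = red.astype(np.float64)
        nir = nir.astype(np.float64)
        denom = np.where(nir + red == 0, 1.0, nir + red)  # avoid division by zero
        ndvi = (nir - red) / denom
        return (ndvi < ndvi_max) & (ndsm < height_max)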
6. CONCLUSION 
The accuracy of the generated orthophoto is critical for any classification technique using LiDAR and aerial images. Due to the nature of the push-broom sensor and the configuration of the test flight (no overlap along the strip and 15% overlap between strips), there is no possibility of overcoming limitations in the red and NIR orthoimages. Occluded areas and ghosting of building roofs (in the across-flight direction) cannot be corrected adequately, and the building roof structure is completely damaged in areas close to the strip edges. This is a major disadvantage for the identification and modelling of building roof structures. Ground control points, where available, should be used to verify the registration of the LiDAR point cloud and the aerial images. In this approach we relied completely on orientation from GPS/INS data, but for future research ground control points will be acquired and the accuracy of the image registration will be measured.
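Once ground control points are available, that verification reduces to comparing surveyed coordinates with the corresponding positions in the registered data. A minimal sketch of such a residual/RMSE computation; the input arrays are placeholders, not measurements from this study:

    import numpy as np

    def registration_rmse(measured_xyz, reference_xyz):
        """RMSE per axis and in 3D between point positions measured in the
        registered data (e.g. LiDAR or orthoimage) and surveyed ground control
        point coordinates. Both inputs are (N, 3) arrays."""
        residuals = np.asarray(measured_xyz, float) - np.asarray(reference_xyz, float)
        rmse_per_axis = np.sqrt((residuals ** 2).mean(axis=0))
        rmse_3d = np.sqrt((residuals ** 2).sum(axis=1).mean())
        return rmse_per_axis, rmse_3d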
	        