Point clouds in each window are automatically classified as trees for certain combinations of point density in the four subregions. These combinations generally require that at least one upper subregion has a high point density, while at least one lower subregion has a lower density. In addition, we require that the difference Zmax - Zmin within each window exceeds a threshold. For the upper subregions a threshold T on the point density is used; T is defined in Equation 1 in terms of UP and up, the mean value and standard deviation of the number of points in each of the upper subregions over all windows.
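As an illustration, a minimal Python sketch of this window test is given below. It assumes that the four subregions are vertical (height) slices of each 2.5 m x 2.5 m window and that Equation 1 takes the form T = mean + one standard deviation of the upper-subregion counts; both assumptions, as well as all function and parameter names (e.g. dz_min), are hypothetical and not taken from the paper.

```python
import numpy as np

def classify_tree_windows(points, window=2.5, n_slices=4, dz_min=2.0):
    """Sketch of the window-based point-density test for trees.

    points   : (N, 3) array of raw-DSM points already reduced to the
               above-DTM subset (buildings, vehicles, high/dense vegetation).
    window   : horizontal window size in metres (2.5 m x 2.5 m in the text).
    n_slices : number of vertical subregions per window (four in the text).
    dz_min   : minimum Zmax - Zmin required in a window (assumed value).
    """
    xy_idx = np.floor(points[:, :2] / window).astype(int)
    keys, inv = np.unique(xy_idx, axis=0, return_inverse=True)

    # Collect the per-window statistics first, because the threshold T is
    # defined over the upper-subregion counts of all windows (Equation 1).
    upper_counts, lower_counts, dz = [], [], []
    for w in range(len(keys)):
        z = points[inv == w, 2]
        span = max(z.ptp(), 1e-6)                 # guard against single-point windows
        edges = z.min() + np.linspace(0.0, span, n_slices + 1)
        hist, _ = np.histogram(z, bins=edges)     # points per vertical slice
        lower_counts.append(hist[: n_slices // 2])
        upper_counts.append(hist[n_slices // 2 :])
        dz.append(z.ptp())
    upper_counts, lower_counts, dz = map(np.asarray, (upper_counts, lower_counts, dz))

    # Assumed form of Equation 1: mean plus one standard deviation of the
    # upper-subregion point counts over all windows (the text only states
    # that these two statistics enter the definition of T).
    T = upper_counts.mean() + upper_counts.std()

    tree_flags = (
        (upper_counts.max(axis=1) >= T)       # at least one dense upper subregion
        & (lower_counts.min(axis=1) < T)      # at least one sparser lower subregion
        & (dz > dz_min)                       # sufficient height range in the window
    )
    return keys, tree_flags
```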
Figure 10. Extracted trees (black points) overlaid on CIR 
orthoimage. 
For the Zurich Airport data, the density threshold T is 6 points per 2.5 m × 2.5 m window. The extracted tree class has been compared with the tree class derived from NDVI and nDSM: 73% of the tree points were correctly classified, while 8% were not detected. The density of the point cloud directly affects the quality of the result. As can be seen in Figure 10, the visible errors in the results are small objects such as vehicles and aircraft. In addition, some tree areas could not be extracted because of the low point density of the whole dataset. As mentioned above, the points in the raw DSM that are not present in DTM-AV describe buildings, vehicles and high or dense vegetation. After extracting the trees using point density analysis, buildings are obtained by subtracting the tree layer from the DSM points corresponding to voids and low-density areas, and by filtering out small objects (Figure 11). The accuracy analysis shows that 92% of the building pixels are correctly classified, while 17% of the buildings could not be detected.
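A minimal sketch of this subtraction-and-filtering step is shown below, assuming the data are rasterized to boolean grids and that small objects are removed by connected-component size; the function name and the minimum-area value are hypothetical.

```python
import numpy as np
from scipy import ndimage

def extract_buildings(above_dtm_mask, tree_mask, min_area_px=50):
    """Buildings = above-DTM cells minus tree cells, then small-object removal.

    above_dtm_mask : 2-D boolean grid of raw-DSM cells not present in DTM-AV.
    tree_mask      : 2-D boolean grid of cells classified as trees.
    min_area_px    : assumed minimum building footprint in grid cells.
    """
    candidate = above_dtm_mask & ~tree_mask            # subtract the tree layer
    labels, n = ndimage.label(candidate)               # connected components
    if n == 0:
        return candidate
    sizes = ndimage.sum(candidate, labels, index=range(1, n + 1))
    large = np.flatnonzero(np.asarray(sizes) >= min_area_px) + 1
    return np.isin(labels, large)                      # keep only large objects
```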
Figure 11. Extracted building points after elimination of tree 
points. 
5. ANALYSIS OF RESULTS 
The accuracy results of the four methods described in Section 4 are summarized in Table 2. Method 4, based only on Lidar data, performs best in terms of correctness but worst in terms of completeness: it does not detect all buildings, but those it detects are correct. On the other hand, Method 1, again based on Lidar data but with an NDVI contribution, extracts the largest part of the buildings, but other objects are included as well, resulting in the worst correctness value. The other two methods have basically the same performance, lying between the results of Methods 1 and 4. It should be noted that results based on Lidar data depend strongly not only on the average point density, but also on the number of echoes registered per pulse and on whether the Lidar data were acquired under leaf-on or leaf-off conditions.
Due to time restrictions, only a first, simple fusion of the results has been attempted. Taking the union of the four building detection results decreases the omission rate (8%), but also decreases the correctness (81%), while the intersection of all results gives the best correctness (96%). The correctness of each method could be improved by developing an automatic detection of objects such as aircraft, which are classified as buildings by all methods.
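As an illustration of this simple fusion, the sketch below combines per-method building masks by union and intersection and computes correctness and omission error per pixel; the per-pixel definition of the metrics is an assumption, and all names are hypothetical.

```python
import numpy as np

def fuse_and_evaluate(masks, reference):
    """Union/intersection fusion of building masks and per-pixel accuracy figures.

    masks     : list of 2-D boolean building masks, one per method.
    reference : 2-D boolean reference building mask (rasterized vector data).
    """
    union = np.logical_or.reduce(masks)           # lowers the omission rate
    intersection = np.logical_and.reduce(masks)   # raises the correctness

    def scores(pred):
        tp = np.count_nonzero(pred & reference)   # correctly detected building pixels
        fp = np.count_nonzero(pred & ~reference)  # false detections
        fn = np.count_nonzero(~pred & reference)  # missed building pixels
        correctness = 100.0 * tp / (tp + fp)
        omission = 100.0 * fn / (tp + fn)
        return correctness, omission

    return {"union": scores(union), "intersection": scores(intersection)}
```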
Taking into account the advantages and limitations of each method, we currently cannot recommend a single solution for building extraction. Intersecting the results of Method 4, based on Lidar data analysis, and Method 2, based on supervised classification, achieves the best correctness rate, but the completeness is poor. The other building layer combinations in Table 2 led to worse results.
Method / combination         Correctness (%)   Omission error (%)
Method 1 (nDSM + NDVI)             76                 10
Method 2 (Class. + nDSM)           86                 13
Method 3 (Voids + NDVI)            87                 13
Method 4 (Lidar)                   92                 17
1 ∪ 2 ∪ 3 ∪ 4                      81                  8*
1 ∩ 2 ∩ 3 ∩ 4                      96*                29
1 ∪ 4                              83                 10
1 ∩ 4                              96*                29
2 ∪ 4                              83                  9
2 ∩ 4                              95                 27

Table 2. Results of building extraction using the four methods and combinations thereof (∪ = union, ∩ = intersection of the building layers). The best results are marked with an asterisk.
6. CONCLUSIONS 
In this paper, different methods for object extraction (mainly buildings) from Lidar data and aerial images have been presented. In each method, the basic idea was first to obtain preliminary results and then to improve them using the other available data. The methods have been tested on a dataset of Zurich Airport, Switzerland, containing aerial colour and infrared images, Lidar DTM and DSM point clouds and regular grids, and vector data for accuracy assessment. The results showed that correctness values of up to 92% can be achieved using Lidar data only, while the highest completeness is obtained by combining image and Lidar data.
Future work will include the improvement of building extraction from aerial images and Lidar data. The algorithms will also be tested on other airport locations. However, the main focus will be on a better fusion of the individual results, the use of image edges for better building delineation, and more detailed building modelling.