
CMRT09

Access restriction

There is no access restriction for this record.

Copyright

CC BY: Attribution 4.0 International.

Bibliographic data


Monograph

Persistent identifier:
856955019
Author:
Stilla, Uwe
Title:
CMRT09
Sub title:
object extraction for 3D city models, road databases, and traffic monitoring ; concepts, algorithms and evaluation ; Paris, France, September 3 - 4, 2009 ; [joint conference of ISPRS working groups III/4 and III/5]
Scope:
X, 234 pages
Year of publication:
2009
Place of publication:
Lemmer
Publisher of the original:
GITC
Identifier (digital):
856955019
Illustration:
Illustrations, diagrams, maps
Language:
English
Usage licence:
Attribution 4.0 International (CC BY 4.0)
Publisher of the digital copy:
Technische Informationsbibliothek Hannover
Place of publication of the digital copy:
Hannover
Year of publication of the digital copy:
2016
Document type:
Monograph
Collection:
Earth sciences

Chapter

Title:
DETECTION OF BUILDINGS AT AIRPORT SITES USING IMAGES & LIDAR DATA AND A COMBINATION OF VARIOUS METHODS Demir, N., Poli, D., Baltsavias, E.
Document type:
Monograph
Structure type:
Chapter

Table of contents

  • CMRT09
  • Cover
  • ColorChart
  • Title page
  • Workshop Committees
  • Program Committee:
  • Preface
  • Contents
  • EFFICIENT ROAD MAPPING VIA INTERACTIVE IMAGE SEGMENTATION O. Barinova, R. Shapovalov, S. Sudakov, A. Velizhev, A. Konushin
  • SURFACE MODELLING FOR ROAD NETWORKS USING MULTI-SOURCE GEODATA Chao-Yuan Lo, Liang-Chien Chen, Chieh-Tsung Chen, and Jia-Xun Chen
  • AUTOMATIC EXTRACTION OF URBAN OBJECTS FROM MULTI-SOURCE AERIAL DATA Adriano Mancini, Emanuele Frontoni and Primo Zingaretti
  • ROAD ROUNDABOUT EXTRACTION FROM VERY HIGH RESOLUTION AERIAL IMAGERY M. Ravenbakhsh, C. S. Fraser
  • ASSESSING THE IMPACT OF DIGITAL SURFACE MODELS ON ROAD EXTRACTION IN SUBURBAN AREAS BY REGION-BASED ROAD SUBGRAPH EXTRACTION Anne Grote, Franz Rottensteiner
  • VEHICLE ACTIVITY INDICATION FROM AIRBORNE LIDAR DATA OF URBAN AREAS BY BINARY SHAPE CLASSIFICATION OF POINT SETS W. Yao, S. Hinz, U. Stilla
  • TRAJECTORY-BASED SCENE DESCRIPTION AND CLASSIFICATION BY ANALYTICAL FUNCTIONS D. Pfeiffer, R. Reulke
  • 3D BUILDING RECONSTRUCTION FROM LIDAR BASED ON A CELL DECOMPOSITION APPROACH Martin Kada, Laurence McKinley
  • A SEMI-AUTOMATIC APPROACH TO OBJECT EXTRACTION FROM A COMBINATION OF IMAGE AND LASER DATA S. A. Mumtaz, K. Mooney
  • COMPLEX SCENE ANALYSIS IN URBAN AREAS BASED ON AN ENSEMBLE CLUSTERING METHOD APPLIED ON LIDAR DATA P. Ramzi, F. Samadzadegan
  • EXTRACTING BUILDING FOOTPRINTS FROM 3D POINT CLOUDS USING TERRESTRIAL LASER SCANNING AT STREET LEVEL Karim Hammoudi, Fadi Dornaika and Nicolas Paparoditis
  • DETECTION OF BUILDINGS AT AIRPORT SITES USING IMAGES & LIDAR DATA AND A COMBINATION OF VARIOUS METHODS Demir, N., Poli, D., Baltsavias, E.
  • DENSE MATCHING IN HIGH RESOLUTION OBLIQUE AIRBORNE IMAGES M. Gerke
  • COMPARISON OF METHODS FOR AUTOMATED BUILDING EXTRACTION FROM HIGH RESOLUTION IMAGE DATA G. Vozikis
  • SEMI-AUTOMATIC CITY MODEL EXTRACTION FROM TRI-STEREOSCOPIC VHR SATELLITE IMAGERY F. Tack, R. Goossens, G. Buyuksalih
  • AUTOMATED SELECTION OF TERRESTRIAL IMAGES FROM SEQUENCES FOR THE TEXTURE MAPPING OF 3D CITY MODELS Sébastien Bénitez and Caroline Baillard
  • CLASSIFICATION SYSTEM OF GIS-OBJECTS USING MULTI-SENSORIAL IMAGERY FOR NEAR-REALTIME DISASTER MANAGEMENT Daniel Frey and Matthias Butenuth
  • AN APPROACH FOR NAVIGATION IN 3D MODELS ON MOBILE DEVICES Wen Jiang, Wu Yuguo, Wang Fan
  • GRAPH-BASED URBAN OBJECT MODEL PROCESSING Kerstin Falkowski and Jürgen Ebert
  • A PROOF OF CONCEPT OF ITERATIVE DSM IMPROVEMENT THROUGH SAR SCENE SIMULATION D. Derauw
  • COMPETING 3D PRIORS FOR OBJECT EXTRACTION IN REMOTE SENSING DATA Konstantinos Karantzalos and Nikos Paragios
  • OBJECT EXTRACTION FROM LIDAR DATA USING AN ARTIFICIAL SWARM BEE COLONY CLUSTERING ALGORITHM S. Saeedi, F. Samadzadegan, N. El-Sheimy
  • BUILDING FOOTPRINT DATABASE IMPROVEMENT FOR 3D RECONSTRUCTION: A DIRECTION AWARE SPLIT AND MERGE APPROACH Bruno Vallet and Marc Pierrot-Deseilligny and Didier Boldo
  • A TEST OF AUTOMATIC BUILDING CHANGE DETECTION APPROACHES Nicolas Champion, Franz Rottensteiner, Leena Matikainen, Xinlian Liang, Juha Hyyppä and Brian P. Olsen
  • CURVELET APPROACH FOR SAR IMAGE DENOISING, STRUCTURE ENHANCEMENT, AND CHANGE DETECTION Andreas Schmitt, Birgit Wessel, Achim Roth
  • RAY TRACING AND SAR-TOMOGRAPHY FOR 3D ANALYSIS OF MICROWAVE SCATTERING AT MAN-MADE OBJECTS S. Auer, X. Zhu, S. Hinz, R. Bamler
  • THEORETICAL ANALYSIS OF BUILDING HEIGHT ESTIMATION USING SPACEBORNE SAR-INTERFEROMETRY FOR RAPID MAPPING APPLICATIONS Stefan Hinz, Sarah Abelen
  • FUSION OF OPTICAL AND INSAR FEATURES FOR BUILDING RECOGNITION IN URBAN AREAS J. D. Wegner, A. Thiele, U. Soergel
  • FAST VEHICLE DETECTION AND TRACKING IN AERIAL IMAGE BURSTS Karsten Kozempel and Ralf Reulke
  • REFINING CORRECTNESS OF VEHICLE DETECTION AND TRACKING IN AERIAL IMAGE SEQUENCES BY MEANS OF VELOCITY AND TRAJECTORY EVALUATION D. Lenhart, S. Hinz
  • UTILIZATION OF 3D CITY MODELS AND AIRBORNE LASER SCANNING FOR TERRAIN-BASED NAVIGATION OF HELICOPTERS AND UAVs M. Hebel, M. Arens, U. Stilla
  • STUDY OF SIFT DESCRIPTORS FOR IMAGE MATCHING BASED LOCALIZATION IN URBAN STREET VIEW CONTEXT David Picard, Matthieu Cord and Eduardo Valle
  • TEXT EXTRACTION FROM STREET LEVEL IMAGES J. Fabrizio, M. Cord, B. Marcotegui
  • CIRCULAR ROAD SIGN EXTRACTION FROM STREET LEVEL IMAGES USING COLOUR, SHAPE AND TEXTURE DATABASE MAPS A. Arlicot, B. Soheilian and N. Paparoditis
  • IMPROVING IMAGE SEGMENTATION USING MULTIPLE VIEW ANALYSIS Martin Drauschke, Ribana Roscher, Thomas Läbe, Wolfgang Förstner
  • REFINING BUILDING FACADE MODELS WITH IMAGES Shi Pu and George Vosselman
  • AN UNSUPERVISED HIERARCHICAL SEGMENTATION OF A FAÇADE BUILDING IMAGE IN ELEMENTARY 2D - MODELS Jean-Pascal Burochin, Olivier Tournaire and Nicolas Paparoditis
  • GRAMMAR SUPPORTED FACADE RECONSTRUCTION FROM MOBILE LIDAR MAPPING Susanne Becker, Norbert Haala
  • Author Index
  • Cover

Full text

In: Stilla U, Rottensteiner F, Paparoditis N (Eds) CMRT09. IAPRS, Vol. XXXVIII, Part 3/W4 — Paris, France, 3-4 September, 2009 
vegetation detection). In addition, new synthetic bands were 
generated from the selected channels: a) 3 images from 
principal component analysis (PC1, PC2, PC3); b) one image 
from NDVI computation using the NIR-R channels and c) one 
saturation image (S) obtained by converting the NIR-R-G 
channels into the IHS (Intensity, Hue, Saturation) colour space. 
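The NDVI and saturation bands can be illustrated with a minimal per-pixel sketch. This is not code from the paper; it only applies the standard textbook definitions (NDVI = (NIR - R)/(NIR + R), and the IHS saturation S = 1 - 3*min/sum) to hypothetical channel values:

```python
def ndvi(nir, r):
    """Normalized difference vegetation index for one pixel (floats)."""
    denom = nir + r
    # Guard against a zero denominator on dark pixels.
    return (nir - r) / denom if denom else 0.0

def ihs_saturation(c1, c2, c3):
    """Saturation component of the IHS colour model: S = 1 - 3*min/sum."""
    total = c1 + c2 + c3
    return 1.0 - 3.0 * min(c1, c2, c3) / total if total else 0.0
```

For example, a bright-vegetation pixel with NIR = 0.8 and R = 0.2 yields an NDVI of 0.6, while a grey pixel with equal channel values has zero saturation.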
The separability of the target classes was analyzed through 
plots of the mean and standard deviation for each class and 
channel, and through divergence matrix analysis of all possible 
combinations of the three CIR channels and the additional 
channels mentioned above. The analysis showed that: 
• G and PC2 have high correlation with other bands 
• NIR-R-PC1 is the best combination based on the plot analysis 
• the NIR band shows good separability based on the divergence analysis 
• the PC1-NDVI-S combination shows the best separability over 
three-band combinations based on the divergence analysis. 
Therefore, the combination NIR-R-PC1-NDVI-S was selected 
for classification. The maximum likelihood classification 
method was used. As expected from their low values in the 
divergence matrix, grass and trees, airport buildings and 
residential houses, airport corridors and bare ground, and airport 
buildings and bare ground could not be separated. Using the 
height information from the nDSM, airport ground, bare ground 
and roads were fused into “ground” and airport buildings with 
residential houses into “buildings”, while trees and grass, as 
well as buildings and ground, could be separated. The final 
classification is shown in Figure 2. 84% of the building class is 
correctly classified; all 109 buildings have been detected, 
though not completely, and the omission error is 9%. Aircraft and 
vehicles are again mixed with buildings. 
Figure 2. Building detection result from method 2. (Left: airport 
buildings, Right: residential area). 
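The maximum likelihood step can be sketched as a per-pixel Gaussian classifier. This is a simplification, not the paper's implementation: it assumes independent bands (diagonal covariance) rather than full covariance matrices, and the class statistics below are hypothetical:

```python
import math

def ml_classify(pixel, class_stats):
    """Assign a pixel (tuple of band values) to the class with the
    highest Gaussian log-likelihood, treating bands as independent.
    class_stats maps a class name to a list of (mean, std) per band."""
    best, best_ll = None, float("-inf")
    for name, bands in class_stats.items():
        # Sum of per-band log densities (constants dropped).
        ll = sum(-math.log(sigma) - 0.5 * ((v - mu) / sigma) ** 2
                 for v, (mu, sigma) in zip(pixel, bands))
        if ll > best_ll:
            best, best_ll = name, ll
    return best
```

In practice each class's mean and standard deviation would be estimated from training pixels of the selected NIR-R-PC1-NDVI-S bands.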
4.3 Building detection using density of raw Lidar DTM and 
NDVI (Method 3) 
Buildings and other objects, like high or dense trees, vehicles, 
aircraft, etc., are characterized by null or very low density in 
the DTM point cloud. Using the vegetation class from the NDVI 
channel as a mask, the areas covered by trees are eliminated, 
while small objects (aircraft, vehicles) are eliminated by 
deleting them if their area is smaller than 25 m². Thus, only 
buildings remain (Figure 3). 85% of building class pixels are 
correctly classified; 108 of 109 buildings have been detected, 
though not fully extracted, and the omission error is 8%. 
Figure 3. Building detection result from method 3. (Left: airport 
buildings. Right: residential area). 
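The small-object removal in Method 3 can be sketched as a connected-component area filter on a binary grid. Only the 25 m² threshold comes from the text; the grid representation, 1 m² cell size and 4-connectivity are assumptions of this illustration:

```python
def filter_small_objects(mask, cell_area=1.0, min_area=25.0):
    """Zero out 4-connected components of a binary grid (list of lists)
    whose total area, cells * cell_area, is below min_area."""
    rows, cols = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    seen = [[False] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if mask[i][j] and not seen[i][j]:
                # Flood-fill one component, collecting its cells.
                stack, comp = [(i, j)], []
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # Delete the component if it is smaller than the threshold.
                if len(comp) * cell_area < min_area:
                    for y, x in comp:
                        out[y][x] = 0
    return out
```

A 5 x 5 block of cells (25 m² at 1 m² per cell) just survives the filter, while an isolated cell is removed.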
4.4 Building and tree detection from Lidar data (Method 4) 
As mentioned above, in the raw DSM data the point density is 
generally much higher at trees than at open terrain or buildings. 
On the other hand, tree areas have low horizontal point density 
in the raw DTM data. We start from regions that are voids or 
have low density in the raw DTM (see Method 3). These 
regions represent mainly buildings and trees and are used as a 
mask to select the raw DSM points for further analysis. In the 
next step, we used a search window of 5 m x 5 m over the raw 
Lidar DSM data. Neighboring windows have an overlap of 
50%. The window size is related to the number of points in the 
window, and the number of points in the search window affects 
the quality of the detection result. The method uses all points in 
the window and labels them as tree if all parameters below have 
been met. The size of 25 m² has been judged large enough to 
extract one single tree. A bigger size may result in wrong 
detections, especially in areas where buildings neighbor single 
trees. On the other hand, the data has a low point density of 
1 pt / 2 m², which means about 13 pts / 25 m². A smaller size 
would contain fewer points, which may not be enough for the 
detection. 
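The overlapping window layout can be sketched as follows. Only the 5 m window size and 50% overlap come from the text; the helper names and the half-open window membership test are assumptions of this sketch:

```python
def window_origins(extent_x, extent_y, size=5.0, overlap=0.5):
    """Lower-left corners of square search windows covering a
    rectangular extent; neighbouring windows overlap by the given
    fraction, i.e. the grid step is size * (1 - overlap)."""
    step = size * (1.0 - overlap)   # 2.5 m step for 5 m windows at 50%
    origins, y = [], 0.0
    while y < extent_y:
        x = 0.0
        while x < extent_x:
            origins.append((x, y))
            x += step
        y += step
    return origins

def points_in_window(points, origin, size=5.0):
    """Select (x, y, z) points whose xy position falls in the window."""
    ox, oy = origin
    return [p for p in points
            if ox <= p[0] < ox + size and oy <= p[1] < oy + size]
```

At the stated density of about 1 pt / 2 m², each 25 m² window would then hold roughly a dozen points for the per-window tests below.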
The points in each search window are projected onto the xz and 
yz planes and divided for each projection into eight equal sub- 
regions using x_min, x_mid, x_max and z_min, z_mid1, z_mid2, 
z_mid3, z_max as boundary values of the sub-regions, with 
x_mid = x_min + 2.5 m, x_max = x_mid + 2.5 m, 
z_mid1 = z_min + (z_max - z_min)/4, 
z_mid2 = z_min + 2(z_max - z_min)/4, 
z_mid3 = z_min + 3(z_max - z_min)/4, and similarly for the yz 
projection. The density in the eight sub-regions is computed. 
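The sub-region counting can be sketched for the xz projection (the yz case is symmetric). The function name and point representation are hypothetical; the boundary values follow the formulas above:

```python
def subregion_counts(points, x_min, z_min, z_max, half=2.5):
    """Count (x, y, z) points in the eight xz sub-regions: two x
    halves split at x_min + half, times four equal z slices between
    z_min and z_max."""
    x_mid = x_min + half
    dz = (z_max - z_min) / 4.0 or 1.0   # guard degenerate flat windows
    counts = [0] * 8
    for x, _, z in points:
        col = 0 if x < x_mid else 1
        row = min(int((z - z_min) / dz), 3)   # clamp z_max into top slice
        counts[2 * row + col] += 1
    return counts
```

The vertical-density parameter (vd) described below is then simply the number of non-empty sub-regions, e.g. sum(1 for c in counts if c > 0).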
The first step is the detection of trees and the second the 
subtraction of tree points from all off-terrain points. The trees 
have been extracted using four different parameters. The 
parameters have been calculated using tree-masked areas of the 
raw Lidar DSM data. The tree mask has been generated by 
Method 2. Then, the calculated parameters (the average over all 
search windows) have been applied to the raw Lidar DSM data 
for the detection of trees. 
The first parameter (s) is the similarity of surface normal vectors. 
We assume that tree points do not fit a plane. By selecting 
three random points in the search window, surface normal 
vectors have been calculated n (the number of points in the 
search window) times. Then, all calculated vectors have been 
compared with each other; for each pair of similar vectors, the 
similarity value was increased by 1. For the tree-masked points, 
the parameter (s) has been calculated as smaller than 2. The 
second parameter (vd) is the number of the eight sub-regions 
which contain at least one point. Trees have a high vertical 
Lidar point density; thus, at trees more sub-regions contain 
Lidar points. Using the tree mask, we have observed that at 
least 5 out of the 8 sub-regions contain points. Thus, the 
parameter (vd) has been selected as
Citation recommendation

Stilla, Uwe. CMRT09. GITC, 2009.
