
CMRT09

Access restriction

There is no access restriction for this record.

Copyright

CC BY: Attribution 4.0 International.

Bibliographic data


Monograph

Persistent identifier:
856955019
Author:
Stilla, Uwe
Title:
CMRT09
Sub title:
object extraction for 3D city models, road databases, and traffic monitoring ; concepts, algorithms and evaluation ; Paris, France, September 3 - 4, 2009 ; [joint conference of ISPRS working groups III/4 and III/5]
Scope:
X, 234 Seiten
Year of publication:
2009
Place of publication:
Lemmer
Publisher of the original:
GITC
Identifier (digital):
856955019
Illustration:
Illustrationen, Diagramme, Karten
Language:
English
Usage licence:
Attribution 4.0 International (CC BY 4.0)
Publisher of the digital copy:
Technische Informationsbibliothek Hannover
Place of publication of the digital copy:
Hannover
Year of publication of the digital copy:
2016
Document type:
Monograph
Collection:
Earth sciences

Chapter

Title:
A TEST OF AUTOMATIC BUILDING CHANGE DETECTION APPROACHES Nicolas Champion, Franz Rottensteiner, Leena Matikainen, Xinlian Liang, Juha Hyyppä and Brian P. Olsen
Document type:
Monograph
Structure type:
Chapter

Contents

Table of contents

  • CMRT09
  • Cover
  • ColorChart
  • Title page
  • Workshop Committees
  • Program Committee:
  • Preface
  • Contents
  • EFFICIENT ROAD MAPPING VIA INTERACTIVE IMAGE SEGMENTATION O. Barinova, R. Shapovalov, S. Sudakov, A. Velizhev, A. Konushin
  • SURFACE MODELLING FOR ROAD NETWORKS USING MULTI-SOURCE GEODATA Chao-Yuan Lo, Liang-Chien Chen, Chieh-Tsung Chen, and Jia-Xun Chen
  • AUTOMATIC EXTRACTION OF URBAN OBJECTS FROM MULTI-SOURCE AERIAL DATA Adriano Mancini, Emanuele Frontoni and Primo Zingaretti
  • ROAD ROUNDABOUT EXTRACTION FROM VERY HIGH RESOLUTION AERIAL IMAGERY M. Ravenbakhsh, C. S. Fraser
  • ASSESSING THE IMPACT OF DIGITAL SURFACE MODELS ON ROAD EXTRACTION IN SUBURBAN AREAS BY REGION-BASED ROAD SUBGRAPH EXTRACTION Anne Grote, Franz Rottensteiner
  • VEHICLE ACTIVITY INDICATION FROM AIRBORNE LIDAR DATA OF URBAN AREAS BY BINARY SHAPE CLASSIFICATION OF POINT SETS W. Yao, S. Hinz, U. Stilla
  • TRAJECTORY-BASED SCENE DESCRIPTION AND CLASSIFICATION BY ANALYTICAL FUNCTIONS D. Pfeiffer, R. Reulke
  • 3D BUILDING RECONSTRUCTION FROM LIDAR BASED ON A CELL DECOMPOSITION APPROACH Martin Kada, Laurence McKinley
  • A SEMI-AUTOMATIC APPROACH TO OBJECT EXTRACTION FROM A COMBINATION OF IMAGE AND LASER DATA S. A. Mumtaz, K. Mooney
  • COMPLEX SCENE ANALYSIS IN URBAN AREAS BASED ON AN ENSEMBLE CLUSTERING METHOD APPLIED ON LIDAR DATA P. Ramzi, F. Samadzadegan
  • EXTRACTING BUILDING FOOTPRINTS FROM 3D POINT CLOUDS USING TERRESTRIAL LASER SCANNING AT STREET LEVEL Karim Hammoudi, Fadi Dornaika and Nicolas Paparoditis
  • DETECTION OF BUILDINGS AT AIRPORT SITES USING IMAGES & LIDAR DATA AND A COMBINATION OF VARIOUS METHODS Demir, N., Poli, D., Baltsavias, E.
  • DENSE MATCHING IN HIGH RESOLUTION OBLIQUE AIRBORNE IMAGES M. Gerke
  • COMPARISON OF METHODS FOR AUTOMATED BUILDING EXTRACTION FROM HIGH RESOLUTION IMAGE DATA G. Vozikis
  • SEMI-AUTOMATIC CITY MODEL EXTRACTION FROM TRI-STEREOSCOPIC VHR SATELLITE IMAGERY F. Tack, R. Goossens, G. Buyuksalih
  • AUTOMATED SELECTION OF TERRESTRIAL IMAGES FROM SEQUENCES FOR THE TEXTURE MAPPING OF 3D CITY MODELS Sébastien Bénitez and Caroline Baillard
  • CLASSIFICATION SYSTEM OF GIS-OBJECTS USING MULTI-SENSORIAL IMAGERY FOR NEAR-REALTIME DISASTER MANAGEMENT Daniel Frey and Matthias Butenuth
  • AN APPROACH FOR NAVIGATION IN 3D MODELS ON MOBILE DEVICES Wen Jiang, Wu Yuguo, Wang Fan
  • GRAPH-BASED URBAN OBJECT MODEL PROCESSING Kerstin Falkowski and Jürgen Ebert
  • A PROOF OF CONCEPT OF ITERATIVE DSM IMPROVEMENT THROUGH SAR SCENE SIMULATION D. Derauw
  • COMPETING 3D PRIORS FOR OBJECT EXTRACTION IN REMOTE SENSING DATA Konstantinos Karantzalos and Nikos Paragios
  • OBJECT EXTRACTION FROM LIDAR DATA USING AN ARTIFICIAL SWARM BEE COLONY CLUSTERING ALGORITHM S. Saeedi, F. Samadzadegan, N. El-Sheimy
  • BUILDING FOOTPRINT DATABASE IMPROVEMENT FOR 3D RECONSTRUCTION: A DIRECTION AWARE SPLIT AND MERGE APPROACH Bruno Vallet and Marc Pierrot-Deseilligny and Didier Boldo
  • A TEST OF AUTOMATIC BUILDING CHANGE DETECTION APPROACHES Nicolas Champion, Franz Rottensteiner, Leena Matikainen, Xinlian Liang, Juha Hyyppä and Brian P. Olsen
  • CURVELET APPROACH FOR SAR IMAGE DENOISING, STRUCTURE ENHANCEMENT, AND CHANGE DETECTION Andreas Schmitt, Birgit Wessel, Achim Roth
  • RAY TRACING AND SAR-TOMOGRAPHY FOR 3D ANALYSIS OF MICROWAVE SCATTERING AT MAN-MADE OBJECTS S. Auer, X. Zhu, S. Hinz, R. Bamler
  • THEORETICAL ANALYSIS OF BUILDING HEIGHT ESTIMATION USING SPACEBORNE SAR-INTERFEROMETRY FOR RAPID MAPPING APPLICATIONS Stefan Hinz, Sarah Abelen
  • FUSION OF OPTICAL AND INSAR FEATURES FOR BUILDING RECOGNITION IN URBAN AREAS J. D. Wegner, A. Thiele, U. Soergel
  • FAST VEHICLE DETECTION AND TRACKING IN AERIAL IMAGE BURSTS Karsten Kozempel and Ralf Reulke
  • REFINING CORRECTNESS OF VEHICLE DETECTION AND TRACKING IN AERIAL IMAGE SEQUENCES BY MEANS OF VELOCITY AND TRAJECTORY EVALUATION D. Lenhart, S. Hinz
  • UTILIZATION OF 3D CITY MODELS AND AIRBORNE LASER SCANNING FOR TERRAIN-BASED NAVIGATION OF HELICOPTERS AND UAVs M. Hebel, M. Arens, U. Stilla
  • STUDY OF SIFT DESCRIPTORS FOR IMAGE MATCHING BASED LOCALIZATION IN URBAN STREET VIEW CONTEXT David Picard, Matthieu Cord and Eduardo Valle
  • TEXT EXTRACTION FROM STREET LEVEL IMAGES J. Fabrizio, M. Cord, B. Marcotegui
  • CIRCULAR ROAD SIGN EXTRACTION FROM STREET LEVEL IMAGES USING COLOUR, SHAPE AND TEXTURE DATABASE MAPS A. Arlicot, B. Soheilian and N. Paparoditis
  • IMPROVING IMAGE SEGMENTATION USING MULTIPLE VIEW ANALYSIS Martin Drauschke, Ribana Roscher, Thomas Läbe, Wolfgang Förstner
  • REFINING BUILDING FACADE MODELS WITH IMAGES Shi Pu and George Vosselman
  • AN UNSUPERVISED HIERARCHICAL SEGMENTATION OF A FAÇADE BUILDING IMAGE IN ELEMENTARY 2D - MODELS Jean-Pascal Burochin, Olivier Tournaire and Nicolas Paparoditis
  • GRAMMAR SUPPORTED FACADE RECONSTRUCTION FROM MOBILE LIDAR MAPPING Susanne Becker, Norbert Haala
  • Author Index
  • Cover

Full text

CMRT09: Object Extraction for 3D City Models, Road Databases and Traffic Monitoring - Concepts, Algorithms, and Evaluation 
Sample Distance (GSD) of all input data is 0.2 m. In Toulouse, 
these images are Pléiades tri-stereoscopic satellite images. The 
GSD of all input data is 0.5 m. Lastly, the DSM used in Lyngby 
was derived from first pulse LIDAR data, and the digital 
orthophoto was generated from a scanned aerial image, both 
with a GSD of 1 m. For the three test areas, up-to-date vector 
databases representing the 2D outlines of buildings were 
available. They served as a reference in the test. In order to 
achieve an objective evaluation, the outdated databases were 
simulated by manually adding or removing buildings. Thus, 107 
changes (out of 1300 buildings in the scene) were simulated in 
Marseille (89 new and 18 demolished buildings); 40 (out of 
200) in Toulouse (23 new, 17 demolished) and 50 (out of 500) 
in Lyngby (29 new, 21 demolished). The outdated databases 
were converted to binary building masks having the same GSD 
as the input data and then distributed to the participants along 
with input data. 
Each group participating in the test was asked to deliver a 
change map in which each building of the vector database is 
labelled either as unchanged, demolished or new. Because the 
methods have been developed in different contexts, their 
designs noticeably differ, for instance regarding the definitions 
of the classes considered in the final change map - e.g. four 
classes for (Champion, 2007) and six classes for (Rottensteiner, 
2008) - and the format of the input data - e.g. vector for 
(Champion, 2007) and raster for (Matikainen et al., 2007). As a 
work-around, it was decided to use the building label image 
representing the updated version of the building map (cf. 
Section 3) for the evaluation of those methods that do not 
deliver the required change map in the way described above. 
Only the method by (Champion, 2007) delivered such a change 
map, which was also directly used in the evaluation. 
In order to evaluate the results achieved by the four algorithms, 
they are compared to the reference database, and the 
completeness and the correctness of the results (Heipke et al., 
1997) are derived as quality measures: 
Completeness = TP / (TP + FN)
Correctness = TP / (TP + FP)        (1)
In Equation 1, TP, FP, and FN are the numbers of True 
Positives, False Positives, and False Negatives, respectively. 
They refer to the update status of the vector objects in the 
automatically-generated change map, compared to their real 
update status given by the reference. In the case where the final 
change map is directly used for the evaluation, i.e. with 
(Champion, 2007), a TP is an object of the database reported as 
changed (demolished or new) that is actually changed in the 
reference. A FP is an object reported as changed by the 
algorithm that has not changed in the reference. A FN is an 
object that was reported as unchanged by the algorithm, but is 
changed in the reference. In the three other cases, where a 
building label image representing the updated map is used for 
the evaluation, the rules for defining an entity as a TP, a FP, or 
a FN had to be adapted. In these cases, any unchanged building 
in the reference database is considered a TP if a predefined 
percentage (T_h) of its area is covered with buildings in the new 
label image. Otherwise, it is considered a FP, because the 
absence of any correspondence in the new label image indicates 
a change. A demolished building in the reference database is 
considered a TP if the percentage of its area covered by any 
building in the new label image is smaller than T_h. Otherwise, it 
is considered to be a FN, because the fact that it corresponds to 
buildings in the new label image indicates that the change has 
remained undetected. A new building in the reference is 
considered a TP if the cover percentage is greater than T_h. 
Otherwise, it is considered a FN. The remaining areas in the 
new label image that do not match any of the previous cases 
correspond to objects wrongly alerted as new by the algorithm 
and thus constitute FPs. 
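The per-building counting rules above, together with the completeness and correctness measures of Equation 1, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the input format (a list of reference statuses with cover fractions) and the default threshold value are assumptions, and buildings detected in the new label image that match no reference object would contribute additional FPs.

```python
def evaluate_change_detection(buildings, t_h=0.5):
    """Per-building TP/FP/FN counting for the label-image evaluation.

    `buildings` is a list of (status, cover) pairs: `status` is the
    reference status ('unchanged', 'demolished', or 'new') and `cover`
    is the fraction of the building's reference footprint covered by
    buildings in the new label image.  `t_h` is the cover threshold T_h.
    """
    tp = fp = fn = 0
    for status, cover in buildings:
        if status == 'unchanged':
            if cover >= t_h:
                tp += 1       # building reappears in the label image
            else:
                fp += 1       # falsely reported as changed
        elif status == 'demolished':
            if cover < t_h:
                tp += 1       # demolition correctly reflected
            else:
                fn += 1       # change remained undetected
        elif status == 'new':
            if cover >= t_h:
                tp += 1       # new building correctly detected
            else:
                fn += 1       # new building missed
    completeness = tp / (tp + fn) if tp + fn else 0.0
    correctness = tp / (tp + fp) if tp + fp else 0.0
    return completeness, correctness
```

Note that unmatched blobs in the new label image (the "wrongly alerted as new" case) are not represented in this toy input and would simply increment the FP count.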
The quality measures are presented in the evaluation on a per- 
building basis (rather than on a per-pixel basis), as the 
effectiveness of a change detection approach is limited by the 
number of changed buildings that are missed or over-detected 
and not by the area covered by these buildings. As explained in 
Section 4, these quality measures are also computed 
separately for each change class. 
3. CHANGE DETECTION APPROACHES 
The four methods tested in this study are concisely presented, 
ordered alphabetically according to the corresponding author. 
Champion, 2007: The input of the method is given by a DSM, 
CIR orthophotos and the outdated vector database. Optionally, 
the original multiple images can also be used. The outcome of 
the method is a modified version of the input vector database, in 
which demolished and unchanged buildings are labelled and 
vector objects assumed to be new are created. The method starts 
with the verification of the database, where geometric 
primitives extracted from the DSM (2D contours, i.e. height 
discontinuities) and, optionally, from multiple images (3D 
segments), are collected for each object of the existing database 
and matched with primitives derived from it. A similarity score 
is then computed for each object and used to achieve a final 
decision about acceptance (unchanged) and rejection (changed 
or demolished). The second processing stage, i.e. the detection 
of new buildings, is based on a Digital Terrain Model (DTM) 
automatically derived from the DSM (Champion and Boldo, 
2006), a normalised DSM (nDSM), defined as the difference 
between the DSM and the DTM, and an above-ground mask, 
processed from the nDSM by thresholding. Appropriate 
morphological tools are then used to compare this latter mask to 
the initial building mask derived from the vector database and a 
vegetation mask computed from CIR orthophotos and an image 
corresponding to the Normalised Difference Vegetation Index 
(NDVI), which results in the extraction of new buildings. 
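The nDSM step of this second stage can be sketched as follows. This is a minimal illustration, not the method of (Champion and Boldo, 2006): the height threshold is an assumed value, and a simple 3x3 morphological opening stands in for the paper's unspecified "appropriate morphological tools".

```python
import numpy as np

def _morph3x3(mask, erode):
    """3x3 binary erosion (erode=True) or dilation (erode=False)."""
    p = np.pad(mask, 1, constant_values=False)
    h, w = mask.shape
    out = np.ones((h, w), bool) if erode else np.zeros((h, w), bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            win = p[dy:dy + h, dx:dx + w]
            out = out & win if erode else out | win
    return out

def above_ground_mask(dsm, dtm, height_threshold=2.5):
    """Above-ground mask from a DSM and an automatically derived DTM."""
    ndsm = dsm - dtm                  # normalised DSM = DSM - DTM
    mask = ndsm > height_threshold    # thresholding -> above-ground pixels
    # Opening (erosion then dilation) removes small spurious blobs.
    return _morph3x3(_morph3x3(mask, erode=True), erode=False)
```

Comparing the resulting mask against the initial building mask and a vegetation mask would then yield the new-building candidates, as described above.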
Matikainen et al., 2007: The building detection method of the 
Finnish Geodetic Institute (FGI) was originally developed to 
use laser scanning data as primary data. In this study, it is 
directly applied to the input DSM and CIR orthophotos. A 
raster version of the database (for a part of the study area) is 
used for training. The method includes three main steps. It starts 
with segmentation and a two-step classification of input data 
into ground and above-ground, based on a point-based analysis 
followed by an object-based analysis and using the Terrasolid¹ 
and Definiens² software systems. This is followed by the 
definition of training segments for buildings and trees and the 
classification of the above-ground segments into buildings and 
trees. This classification is based on predefined attributes and a 
classification tree (Breiman et al., 1984). A large number of 
¹ http://www.terrasolid.fi/. Last visited: 30 June 2009.
² http://www.definiens.com/. Last visited: 30 June 2009.
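The final step of the FGI method, labelling above-ground segments as buildings or trees with a classification tree (Breiman et al., 1984), can be sketched as below. The segment attributes (mean nDSM height and mean NDVI) and the training values are illustrative assumptions; the paper's actual predefined attributes are not listed in this excerpt.

```python
from sklearn.tree import DecisionTreeClassifier

# Training segments: [mean height (m), mean NDVI] -- toy values.
X_train = [[8.0, 0.05], [5.0, 0.10], [12.0, 0.02],   # buildings: low NDVI
           [7.0, 0.60], [10.0, 0.55], [4.0, 0.70]]   # trees: high NDVI
y_train = ['building', 'building', 'building', 'tree', 'tree', 'tree']

# CART-style classification tree fitted on the training segments.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X_train, y_train)

# Above-ground segments are then labelled from their attributes:
labels = tree.predict([[9.0, 0.03], [6.0, 0.65]])
```

With cleanly separable training data like this, the tree learns a single NDVI split, which mirrors the intuition that spectral vegetation response is the strongest cue for separating trees from buildings.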