
CMRT09

Access restriction

There is no access restriction for this record.

Copyright

CC BY: Attribution 4.0 International.

Bibliographic data


Monograph

Persistent identifier:
856955019
Author:
Stilla, Uwe
Title:
CMRT09
Sub title:
object extraction for 3D city models, road databases, and traffic monitoring ; concepts, algorithms and evaluation ; Paris, France, September 3 - 4, 2009 ; [joint conference of ISPRS working groups III/4 and III/5]
Scope:
X, 234 Seiten
Year of publication:
2009
Place of publication:
Lemmer
Publisher of the original:
GITC
Identifier (digital):
856955019
Illustration:
Illustrationen, Diagramme, Karten
Language:
English
Usage licence:
Attribution 4.0 International (CC BY 4.0)
Publisher of the digital copy:
Technische Informationsbibliothek Hannover
Place of publication of the digital copy:
Hannover
Year of publication of the digital copy:
2016
Document type:
Monograph
Collection:
Earth sciences

Chapter

Title:
COMPLEX SCENE ANALYSIS IN URBAN AREAS BASED ON AN ENSEMBLE CLUSTERING METHOD APPLIED ON LIDAR DATA P. Ramzi, F. Samadzadegan
Document type:
Monograph
Structure type:
Chapter

Contents

Table of contents

  • CMRT09
  • Cover
  • ColorChart
  • Title page
  • Workshop Committees
  • Program Committee:
  • Preface
  • Contents
  • EFFICIENT ROAD MAPPING VIA INTERACTIVE IMAGE SEGMENTATION O. Barinova, R. Shapovalov, S. Sudakov, A. Velizhev, A. Konushin
  • SURFACE MODELLING FOR ROAD NETWORKS USING MULTI-SOURCE GEODATA Chao-Yuan Lo, Liang-Chien Chen, Chieh-Tsung Chen, and Jia-Xun Chen
  • AUTOMATIC EXTRACTION OF URBAN OBJECTS FROM MULTI-SOURCE AERIAL DATA Adriano Mancini, Emanuele Frontoni and Primo Zingaretti
  • ROAD ROUNDABOUT EXTRACTION FROM VERY HIGH RESOLUTION AERIAL IMAGERY M. Ravenbakhsh, C. S. Fraser
  • ASSESSING THE IMPACT OF DIGITAL SURFACE MODELS ON ROAD EXTRACTION IN SUBURBAN AREAS BY REGION-BASED ROAD SUBGRAPH EXTRACTION Anne Grote, Franz Rottensteiner
  • VEHICLE ACTIVITY INDICATION FROM AIRBORNE LIDAR DATA OF URBAN AREAS BY BINARY SHAPE CLASSIFICATION OF POINT SETS W. Yao, S. Hinz, U. Stilla
  • TRAJECTORY-BASED SCENE DESCRIPTION AND CLASSIFICATION BY ANALYTICAL FUNCTIONS D. Pfeiffer, R. Reulke
  • 3D BUILDING RECONSTRUCTION FROM LIDAR BASED ON A CELL DECOMPOSITION APPROACH Martin Kada, Laurence McKinley
  • A SEMI-AUTOMATIC APPROACH TO OBJECT EXTRACTION FROM A COMBINATION OF IMAGE AND LASER DATA S. A. Mumtaz, K. Mooney
  • COMPLEX SCENE ANALYSIS IN URBAN AREAS BASED ON AN ENSEMBLE CLUSTERING METHOD APPLIED ON LIDAR DATA P. Ramzi, F. Samadzadegan
  • EXTRACTING BUILDING FOOTPRINTS FROM 3D POINT CLOUDS USING TERRESTRIAL LASER SCANNING AT STREET LEVEL Karim Hammoudi, Fadi Dornaika and Nicolas Paparoditis
  • DETECTION OF BUILDINGS AT AIRPORT SITES USING IMAGES & LIDAR DATA AND A COMBINATION OF VARIOUS METHODS Demir, N., Poli, D., Baltsavias, E.
  • DENSE MATCHING IN HIGH RESOLUTION OBLIQUE AIRBORNE IMAGES M. Gerke
  • COMPARISON OF METHODS FOR AUTOMATED BUILDING EXTRACTION FROM HIGH RESOLUTION IMAGE DATA G. Vozikis
  • SEMI-AUTOMATIC CITY MODEL EXTRACTION FROM TRI-STEREOSCOPIC VHR SATELLITE IMAGERY F. Tack, R. Goossens, G. Buyuksalih
  • AUTOMATED SELECTION OF TERRESTRIAL IMAGES FROM SEQUENCES FOR THE TEXTURE MAPPING OF 3D CITY MODELS Sébastien Bénitez and Caroline Baillard
  • CLASSIFICATION SYSTEM OF GIS-OBJECTS USING MULTI-SENSORIAL IMAGERY FOR NEAR-REALTIME DISASTER MANAGEMENT Daniel Frey and Matthias Butenuth
  • AN APPROACH FOR NAVIGATION IN 3D MODELS ON MOBILE DEVICES Wen Jiang, Wu Yuguo, Wang Fan
  • GRAPH-BASED URBAN OBJECT MODEL PROCESSING Kerstin Falkowski and Jürgen Ebert
  • A PROOF OF CONCEPT OF ITERATIVE DSM IMPROVEMENT THROUGH SAR SCENE SIMULATION D. Derauw
  • COMPETING 3D PRIORS FOR OBJECT EXTRACTION IN REMOTE SENSING DATA Konstantinos Karantzalos and Nikos Paragios
  • OBJECT EXTRACTION FROM LIDAR DATA USING AN ARTIFICIAL SWARM BEE COLONY CLUSTERING ALGORITHM S. Saeedi, F. Samadzadegan, N. El-Sheimy
  • BUILDING FOOTPRINT DATABASE IMPROVEMENT FOR 3D RECONSTRUCTION: A DIRECTION AWARE SPLIT AND MERGE APPROACH Bruno Vallet and Marc Pierrot-Deseilligny and Didier Boldo
  • A TEST OF AUTOMATIC BUILDING CHANGE DETECTION APPROACHES Nicolas Champion, Franz Rottensteiner, Leena Matikainen, Xinlian Liang, Juha Hyyppä and Brian P. Olsen
  • CURVELET APPROACH FOR SAR IMAGE DENOISING, STRUCTURE ENHANCEMENT, AND CHANGE DETECTION Andreas Schmitt, Birgit Wessel, Achim Roth
  • RAY TRACING AND SAR-TOMOGRAPHY FOR 3D ANALYSIS OF MICROWAVE SCATTERING AT MAN-MADE OBJECTS S. Auer, X. Zhu, S. Hinz, R. Bamler
  • THEORETICAL ANALYSIS OF BUILDING HEIGHT ESTIMATION USING SPACEBORNE SAR-INTERFEROMETRY FOR RAPID MAPPING APPLICATIONS Stefan Hinz, Sarah Abelen
  • FUSION OF OPTICAL AND INSAR FEATURES FOR BUILDING RECOGNITION IN URBAN AREAS J. D. Wegner, A. Thiele, U. Soergel
  • FAST VEHICLE DETECTION AND TRACKING IN AERIAL IMAGE BURSTS Karsten Kozempel and Ralf Reulke
  • REFINING CORRECTNESS OF VEHICLE DETECTION AND TRACKING IN AERIAL IMAGE SEQUENCES BY MEANS OF VELOCITY AND TRAJECTORY EVALUATION D. Lenhart, S. Hinz
  • UTILIZATION OF 3D CITY MODELS AND AIRBORNE LASER SCANNING FOR TERRAIN-BASED NAVIGATION OF HELICOPTERS AND UAVs M. Hebel, M. Arens, U. Stilla
  • STUDY OF SIFT DESCRIPTORS FOR IMAGE MATCHING BASED LOCALIZATION IN URBAN STREET VIEW CONTEXT David Picard, Matthieu Cord and Eduardo Valle
  • TEXT EXTRACTION FROM STREET LEVEL IMAGES J. Fabrizio, M. Cord, B. Marcotegui
  • CIRCULAR ROAD SIGN EXTRACTION FROM STREET LEVEL IMAGES USING COLOUR, SHAPE AND TEXTURE DATABASE MAPS A. Arlicot, B. Soheilian and N. Paparoditis
  • IMPROVING IMAGE SEGMENTATION USING MULTIPLE VIEW ANALYSIS Martin Drauschke, Ribana Roscher, Thomas Läbe, Wolfgang Förstner
  • REFINING BUILDING FACADE MODELS WITH IMAGES Shi Pu and George Vosselman
  • AN UNSUPERVISED HIERARCHICAL SEGMENTATION OF A FAÇADE BUILDING IMAGE IN ELEMENTARY 2D - MODELS Jean-Pascal Burochin, Olivier Tournaire and Nicolas Paparoditis
  • GRAMMAR SUPPORTED FACADE RECONSTRUCTION FROM MOBILE LIDAR MAPPING Susanne Becker, Norbert Haala
  • Author Index
  • Cover

Full text

In: Stilla U, Rottensteiner F, Paparoditis N (Eds) CMRT09. IAPRS, Vol. XXXVIII, Part 3/W4. Paris, France, 3-4 September, 2009
Here, the membership degree h for every instance x_i to cluster j is produced based on the Euclidean distance d(x_i, μ_j), where μ_j is the cluster center.
At each iteration, the boost-clustering algorithm clusters data points that were hard to cluster in previous iterations. An important issue to be addressed here is the cluster correspondence problem between the clustering results of different iterations (Frossyniotis et al., 2004).
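The distance-based membership degree can be sketched as follows. This is only an illustrative stand-in: the inverse-distance normalization used here is an assumption, not the exact weighting of Frossyniotis et al. (2004).

```python
import numpy as np

# Illustrative sketch only: membership degrees from Euclidean distances
# to cluster centers. The exact weighting used in boost-clustering
# follows Frossyniotis et al. (2004); this inverse-distance
# normalization is a hypothetical stand-in.
def membership_degrees(X, centers):
    """Return an (n_points, n_clusters) matrix of membership degrees."""
    # Pairwise Euclidean distances d(x_i, mu_j).
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    # Closer centers get larger degrees; epsilon avoids division by zero.
    sim = 1.0 / (d + 1e-12)
    return sim / sim.sum(axis=1, keepdims=True)

X = np.array([[0.0, 0.0], [1.0, 1.0], [10.0, 10.0]])
centers = np.array([[0.0, 0.0], [10.0, 10.0]])
h = membership_degrees(X, centers)  # each row sums to 1
```

Points close to a center receive a membership degree near 1 for that cluster, which is the behaviour the boosting step relies on when reweighting hard-to-cluster points.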
2.2 Feature Extraction 
The first step in every clustering process is to extract the feature 
image bands. These features must contain useful information to 
discriminate between different regions of the surface. In our 
experiment we have used two types of features: 
- the first-pulse range image filtered using the gradient 
- the Opening-filtered last-pulse range image 
In our experiments, these two features carried enough 
information to extract our objects of interest. 
The normalized difference of the first- and last-pulse range 
images (NDDI) is usually used as the major feature band for 
discriminating vegetation pixels from the others. 
However, building boundaries also show a large value in this 
image feature. This is because when the laser beam hits the 
exposed surface, its footprint has a size in the range 
of 15-30 cm or more. So, if the laser beam hits the edge of a 
building, part of the beam footprint will be reflected from 
the roof of the building and the other part might reach the 
ground (Alharthy and Bethel, 2002). The high gradient response 
on building edges is therefore utilized to filter the NDDI image 
using equation 6. 
NDDI = (FPR - LPR) / (FPR + LPR)    (6) 

if gradient > threshold, then (FPR - LPR) = 0.0 

where 
FPR = first-pulse range image data 
LPR = last-pulse range image data 
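Equation (6) can be sketched in code as a minimal example. The arrays and the gradient threshold value below are hypothetical; the paper does not state a concrete threshold.

```python
import numpy as np

# Sketch of equation (6). FPR/LPR are assumed to be co-registered
# first- and last-pulse range images as float arrays; the gradient
# threshold value here is hypothetical.
def nddi(fpr, lpr, gradient, threshold=2.0):
    diff = fpr - lpr
    # Suppress building edges: where the gradient response is high,
    # the first/last pulse difference is forced to zero.
    diff = np.where(gradient > threshold, 0.0, diff)
    denom = fpr + lpr
    # Guard against division by zero in zero-range regions.
    return np.divide(diff, denom, out=np.zeros_like(diff), where=denom != 0)

fpr = np.array([[20.0, 30.0]])
lpr = np.array([[10.0, 30.0]])
flat_grad = np.array([[0.5, 0.5]])
n = nddi(fpr, lpr, flat_grad)                  # vegetation-like pixel
edge = ndddi = ndi = None
edge = ndddi = None
edge = ndddi = None
```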
The gradient of an image is calculated using equation 7: 
G(image) = sqrt(G_x(image)^2 + G_y(image)^2)    (7) 

where 
G_x = gradient operator in the x direction 
G_y = gradient operator in the y direction 
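Equation (7) can be sketched as follows. The text does not name the gradient operator, so NumPy's central-difference `np.gradient` stands in for G_x and G_y here.

```python
import numpy as np

# Minimal sketch of equation (7). The paper does not name the gradient
# operator, so NumPy's central differences stand in for G_x and G_y.
def gradient_magnitude(image):
    gy, gx = np.gradient(image.astype(float))  # d/dy (rows), d/dx (cols)
    return np.sqrt(gx ** 2 + gy ** 2)

ramp = np.tile(np.arange(5.0), (4, 1))  # brightness rises by 1 per column
g = gradient_magnitude(ramp)            # magnitude 1 everywhere on a ramp
```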
The morphological Opening operator is utilized to filter the elevation 
data. This operator with a flat structuring element eliminates 
the trend surface of the terrain. The main problem of using this 
filter is defining the proper size of the structuring element, 
which should be big enough to cover all 3D objects that can 
be found on the terrain surface. The Opening operation is 
defined by: 
A ∘ B = (A ⊖ B) ⊕ B    (8) 

where 

A ⊕ B = {x | (B̂)_x ∩ A ≠ ∅}    (9) 

is the morphological Dilation of set A with structuring element B, and 

A ⊖ B = {x | B_x ⊆ A}    (10) 

is the morphological Erosion of set A with structuring element B 
(Gonzalez and Woods, 2006). 
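The erosion-dilation composition of equation (8) can be sketched in plain NumPy with a flat square structuring element; in practice a library routine such as `scipy.ndimage.grey_opening` would normally be used. The terrain array below is hypothetical.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Plain-NumPy sketch of grey-scale Opening (equation 8) with a flat
# k x k structuring element.
def _erode(img, k):
    p = np.pad(img, k // 2, mode="edge")
    return sliding_window_view(p, (k, k)).min(axis=(2, 3))

def _dilate(img, k):
    p = np.pad(img, k // 2, mode="edge")
    return sliding_window_view(p, (k, k)).max(axis=(2, 3))

def opening(img, k=3):
    # A o B = (A eroded by B) dilated by B: removes bright structures
    # smaller than the structuring element, leaving the terrain trend.
    return _dilate(_erode(img, k), k)

terrain = np.zeros((7, 7))
terrain[3, 3] = 5.0            # a small above-ground object
trend = opening(terrain, k=3)  # object removed, flat trend remains
ndsm = terrain - trend         # normalized height of the object
```

Subtracting the opened surface from the original elevation image leaves only the above-ground objects, which is why the structuring element must be larger than any object of interest.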
2.3 Quality Analysis 
Comparative studies on clustering algorithms are difficult due 
to the lack of universally agreed-upon quantitative performance 
evaluation measures (Jain et al., 1999). Many similar works in 
the clustering area use the classification error as the final 
quality measure, so in this research we adopt a similar 
approach. 
Here, we use the error matrix as the main evaluation method for the 
interpretation result. Each column of this matrix indicates the 
instances in a predicted class, and each row represents the instances 
in an actual class. The diagonal entries give the number of correctly 
interpreted instances of each class found in reality. Several 
measures can be derived from the error matrix, such as producer 
accuracy, user accuracy and overall accuracy (Liu et al., 2007). 
Producer Accuracy (PA) is the probability that a sampled unit 
in the image is in that particular class. User Accuracy (UA) is 
the probability that a certain reference class has also been 
labelled that class. The producer accuracy and user accuracy 
of each class indicate the interpretability of that 
feature class. The producer accuracy and user accuracy of all 
classes are summarized by the measures of “producer overall 
accuracy” and “user overall accuracy”. 
PA_i = (N_ii / N_i.) × 100% ,   UA_i = (N_ii / N_.i) × 100%    (11) 

where 
N_ij = (i,j)th entry in the confusion matrix 
N_i. = the sum of row i (over all columns) 
N_.i = the sum of column i (over all rows) 
“Overall accuracy” considers the producer accuracy and user 
accuracy of all the feature classes. Overall accuracy yields a single 
number for the whole error matrix: it is the sum of correctly 
classified samples divided by the total number of samples from the user 
set and reference set (Liu et al., 2007). 
OA = (Σ_{i=1..k} N_ii / N) × 100%    (12) 

where N is the total number of samples and k is the number of classes. 
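Equations (11) and (12) can be sketched together, with rows as actual (reference) classes and columns as predicted classes, as in the text. The example confusion matrix below is hypothetical.

```python
import numpy as np

# Sketch of equations (11) and (12). Rows of the confusion matrix are
# the actual (reference) classes, columns the predicted classes; the
# example matrix below is hypothetical.
def accuracies(cm):
    cm = np.asarray(cm, dtype=float)
    diag = np.diag(cm)
    pa = 100.0 * diag / cm.sum(axis=1)  # producer's accuracy: N_ii / N_i.
    ua = 100.0 * diag / cm.sum(axis=0)  # user's accuracy:     N_ii / N_.i
    oa = 100.0 * diag.sum() / cm.sum()  # overall accuracy, equation (12)
    return pa, ua, oa

cm = [[50, 5],   # actual class 1: 50 correct, 5 mislabelled as class 2
      [10, 35]]  # actual class 2: 10 mislabelled as class 1, 35 correct
pa, ua, oa = accuracies(cm)
```

With this matrix, 85 of 100 samples lie on the diagonal, so the overall accuracy is 85%.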
Another factor that can also be extracted from the confusion matrix to 
evaluate the quality of classification algorithms is the K-
