
Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects

Access restriction

There is no access restriction for this record.

Copyright

CC BY: Attribution 4.0 International.

Bibliographic data


Monograph

Persistent identifier:
856473650
Author:
Baltsavias, Emmanuel P.
Title:
Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects
Sub title:
Joint ISPRS/EARSeL Workshop ; 3 - 4 June 1999, Valladolid, Spain
Scope:
III, 209 Seiten
Year of publication:
1999
Place of publication:
Coventry
Publisher of the original:
RICS Books
Identifier (digital):
856473650
Illustration:
Illustrationen, Diagramme, Karten
Language:
English
Usage licence:
Attribution 4.0 International (CC BY 4.0)
Publisher of the digital copy:
Technische Informationsbibliothek Hannover
Place of publication of the digital copy:
Hannover
Year of publication of the digital copy:
2016
Document type:
Monograph
Collection:
Earth sciences

Chapter

Title:
TECHNICAL SESSION 3 OBJECT AND IMAGE CLASSIFICATION
Document type:
Monograph
Structure type:
Chapter

Chapter

Title:
INCLUSION OF MULTISPECTRAL DATA INTO OBJECT RECOGNITION. Bea Csathó, Toni Schenk, Dong-Cheon Lee and Sagi Filin
Document type:
Monograph
Structure type:
Chapter

Contents

Table of contents

  • Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects
  • Cover
  • ColorChart
  • Title page
  • CONTENTS
  • PREFACE
  • TECHNICAL SESSION 1 OVERVIEW OF IMAGE / DATA / INFORMATION FUSION AND INTEGRATION
  • DEFINITIONS AND TERMS OF REFERENCE IN DATA FUSION. L. Wald
  • TOOLS AND METHODS FOR FUSION OF IMAGES OF DIFFERENT SPATIAL RESOLUTION. C. Pohl
  • INTEGRATION OF IMAGE ANALYSIS AND GIS. Emmanuel Baltsavias, Michael Hahn,
  • TECHNICAL SESSION 2 PREREQUISITES FOR FUSION / INTEGRATION: IMAGE TO IMAGE / MAP REGISTRATION
  • GEOCODING AND COREGISTRATION OF MULTISENSOR AND MULTITEMPORAL REMOTE SENSING IMAGES. Hannes Raggam, Mathias Schardt and Heinz Gallaun
  • GEORIS: A TOOL TO OVERLAY PRECISELY DIGITAL IMAGERY. Ph. Garnesson, D. Bruckert
  • AUTOMATED PROCEDURES FOR MULTISENSOR REGISTRATION AND ORTHORECTIFICATION OF SATELLITE IMAGES. Ian Dowman and Paul Dare
  • TECHNICAL SESSION 3 OBJECT AND IMAGE CLASSIFICATION
  • LANDCOVER MAPPING BY INTERRELATED SEGMENTATION AND CLASSIFICATION OF SATELLITE IMAGES. W. Schneider, J. Steinwendner
  • INCLUSION OF MULTISPECTRAL DATA INTO OBJECT RECOGNITION. Bea Csathó, Toni Schenk, Dong-Cheon Lee and Sagi Filin
  • SCALE CHARACTERISTICS OF LOCAL AUTOCOVARIANCES FOR TEXTURE SEGMENTATION. Annett Faber, Wolfgang Förstner
  • BAYESIAN METHODS: APPLICATIONS IN INFORMATION AGGREGATION AND IMAGE DATA MINING. Mihai Datcu and Klaus Seidel
  • TECHNICAL SESSION 4 FUSION OF SENSOR-DERIVED PRODUCTS
  • AUTOMATIC CLASSIFICATION OF URBAN ENVIRONMENTS FOR DATABASE REVISION USING LIDAR AND COLOR AERIAL IMAGERY. N. Haala, V. Walter
  • STRATEGIES AND METHODS FOR THE FUSION OF DIGITAL ELEVATION MODELS FROM OPTICAL AND SAR DATA. M. Honikel
  • INTEGRATION OF DTMS USING WAVELETS. M. Hahn, F. Samadzadegan
  • ANISOTROPY INFORMATION FROM MOMS-02/PRIRODA STEREO DATASETS - AN ADDITIONAL PHYSICAL PARAMETER FOR LAND SURFACE CHARACTERISATION. Th. Schneider, I. Manakos, Peter Reinartz, R. Müller
  • TECHNICAL SESSION 5 FUSION OF VARIABLE SPATIAL / SPECTRAL RESOLUTION IMAGES
  • ADAPTIVE FUSION OF MULTISOURCE RASTER DATA APPLYING FILTER TECHNIQUES. K. Steinnocher
  • FUSION OF 18 m MOMS-2P AND 30 m LANDSAT TM MULTISPECTRAL DATA BY THE GENERALIZED LAPLACIAN PYRAMID. Bruno Aiazzi, Luciano Alparone, Stefano Baronti, Ivan Pippi
  • OPERATIONAL APPLICATIONS OF MULTI-SENSOR IMAGE FUSION. C. Pohl, H. Touron
  • TECHNICAL SESSION 6 INTEGRATION OF IMAGE ANALYSIS AND GIS
  • KNOWLEDGE BASED INTERPRETATION OF MULTISENSOR AND MULTITEMPORAL REMOTE SENSING IMAGES. Stefan Growe
  • AUTOMATIC RECONSTRUCTION OF ROOFS FROM MAPS AND ELEVATION DATA. U. Stilla, K. Jurkiewicz
  • INVESTIGATION OF SYNERGY EFFECTS BETWEEN SATELLITE IMAGERY AND DIGITAL TOPOGRAPHIC DATABASES BY USING INTEGRATED KNOWLEDGE PROCESSING. Dietmar Kunz
  • INTERACTIVE SESSION 1 IMAGE CLASSIFICATION
  • AN AUTOMATED APPROACH FOR TRAINING DATA SELECTION WITHIN AN INTEGRATED GIS AND REMOTE SENSING ENVIRONMENT FOR MONITORING TEMPORAL CHANGES. Ulrich Rhein
  • CLASSIFICATION OF SETTLEMENT STRUCTURES USING MORPHOLOGICAL AND SPECTRAL FEATURES IN FUSED HIGH RESOLUTION SATELLITE IMAGES (IRS-1C). Maik Netzband, Gotthard Meinel, Regin Lippold
  • ASSESSMENT OF NOISE VARIANCE AND INFORMATION CONTENT OF MULTI-/HYPER-SPECTRAL IMAGERY. Bruno Aiazzi, Luciano Alparone, Alessandro Barducci, Stefano Baronti, Ivan Pippi
  • COMBINING SPECTRAL AND TEXTURAL FEATURES FOR MULTISPECTRAL IMAGE CLASSIFICATION WITH ARTIFICIAL NEURAL NETWORKS. H. He, C. Collet
  • TECHNICAL SESSION 7 APPLICATIONS IN FORESTRY
  • SENSOR FUSED IMAGES FOR VISUAL INTERPRETATION OF FOREST STAND BORDERS. R. Fritz, I. Freeh, B. Koch, Chr. Ueffing
  • A LOCAL CORRELATION APPROACH FOR THE FUSION OF REMOTE SENSING DATA WITH DIFFERENT SPATIAL RESOLUTIONS IN FORESTRY APPLICATIONS. J. Hill, C. Diemer, O. Stöver, Th. Udelhoven
  • OBJECT-BASED CLASSIFICATION AND APPLICATIONS IN THE ALPINE FOREST ENVIRONMENT. R. de Kok, T. Schneider, U. Ammer
  • Author Index
  • Keyword Index
  • Cover

Full text

International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, Valladolid, Spain, 3-4 June, 1999 
Edges are discontinuities in the gray levels of an image. 
Except for noise or systematic sensor errors, edges are caused 
by events in the object space. Examples of such events 
include physical boundaries of objects, shadows, and 
variations in the reflectance of material. It follows that edges 
are useful features, as they often convey information about 
objects in one way or another. 
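The detection of such gray-level discontinuities can be sketched with a standard Sobel gradient operator; the `sobel_magnitude` function and the toy step-edge image below are illustrative additions, not part of the original paper.

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude of a grayscale image (2-D list).

    Gray-level discontinuities (edges) yield large values; flat
    regions yield values near zero. Border pixels are left at 0.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    # Sobel kernels for horizontal (kx) and vertical (ky) changes
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A tiny image with a vertical step edge between columns 1 and 2
img = [[0, 0, 9, 9]] * 4
mag = sobel_magnitude(img)  # interior pixels adjacent to the step
                            # respond strongly; gy is 0 everywhere
```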
Segmentation is another useful step in extracting information 
about objects. Segmentation entails grouping pixels that share 
similar characteristics. Unfortunately, this is quite a vague 
definition, and not surprisingly the similarity criteria are often 
defined by the application. 
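Such grouping of similar pixels can be sketched as a simple region-growing flood fill; the `segment` function, the 4-connectivity, and the gray-value tolerance `tol` are illustrative choices, since the text leaves the similarity criterion to the application.

```python
def segment(img, tol):
    """Group 4-connected pixels whose gray values differ by at most
    `tol`, one of many possible similarity criteria. Returns a label
    image; pixels sharing a label belong to one segment.
    """
    h, w = len(img), len(img[0])
    labels = [[-1] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            # flood fill outward from the unlabeled seed pixel
            stack = [(sy, sx)]
            labels[sy][sx] = next_label
            while stack:
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] == -1
                            and abs(img[ny][nx] - img[y][x]) <= tol):
                        labels[ny][nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels

# Two homogeneous regions separated by a large gray-value jump
img = [[0, 0, 9, 9],
       [0, 0, 9, 9]]
lab = segment(img, tol=1)  # → [[0, 0, 1, 1], [0, 0, 1, 1]]
```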
The output of the first stage is already a bit more abstract than 
the sensory input data. We see a transition from signals to 
symbols, however primitive they may still be. These primitive 
symbols are now subject to a grouping process that attempts 
to perceptually organize them. Organization is one of the first 
steps in perception. The goal of grouping is to find and 
combine those symbols that relate to the same object. Again, 
the governing grouping principles may be application 
dependent. 
The next step in model-based object recognition consists of 
comparing the extracted and grouped features (data model) 
with a model of the real object (object model), a process 
called matching. If there is sufficient agreement, then the data 
model is labeled with the object and undergoes a validation 
procedure. Crucial to the matching step are the object model 
and the representational compatibility between the data and 
object models. It is fruitless to describe an object by properties 
that cannot be extracted from the sensor data. Take color, for 
example, and the case of a roof. If only monochromatic 
imagery is available then we cannot use ‘red’ in the roof 
description. 
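The representation-compatibility requirement can be sketched as follows; the `match_score` function and the roof dictionaries are hypothetical illustrations, not the paper's method.

```python
def match_score(data_model, object_model, extractable):
    """Compare an extracted data model against an object model, using
    only the properties the sensor data can actually deliver.

    `extractable` lists the properties obtainable from the sensor; an
    object property outside this set (e.g. 'color' when only
    monochromatic imagery is available) is simply ignored.
    """
    usable = {k: v for k, v in object_model.items() if k in extractable}
    if not usable:
        return 0.0
    hits = sum(1 for k, v in usable.items() if data_model.get(k) == v)
    return hits / len(usable)

# Hypothetical roof model: with gray-value imagery only, 'red' cannot
# be checked, so the match relies on shape and texture alone.
roof_model = {"shape": "rectangular", "color": "red", "texture": "tiled"}
data_model = {"shape": "rectangular", "texture": "tiled"}
score = match_score(data_model, roof_model,
                    extractable={"shape", "texture"})  # → 1.0
```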
The sequential way in which the paradigm is presented is often 
called bottom-up or data driven. A model driven or top-down 
approach follows the opposite direction. Here, domain 
specific knowledge triggers expectations of where objects 
may occur in the data. In practice, both approaches are 
combined. 
2.2. Multisensor fusion 
Multisensor integration means the synergistic use of the 
information provided by multiple sensory devices to assist the 
accomplishment of a task by a system. The literature on 
multisensor integration in computer vision and machine 
intelligence is substantial. For an extensive review, we refer 
the interested reader to Abidi and Gonzalez (1992), or Hall 
(1992). 
At the heart of multisensor integration lies multisensor fusion. 
Multisensor fusion refers to any stage of the integration 
process where information from different sensors is combined 
(fused) into one representation form. Hence, multisensor 
fusion can take place at the signal, pixel, feature, or symbol 
level of representation. Most sensors typically used in practice 
provide data that can be fused at one or more of these levels. 
Signal-level fusion refers to the combination of signals from 
different sensors with the objective of providing a new signal 
that is usually of the same form but of better quality. In pixel- 
level fusion, a new image is formed through the combination 
of multiple images to increase the information content 
associated with each pixel. Feature-level fusion helps make 
feature extraction more robust and create composite 
features from different signals and images. Symbol-level 
fusion allows the information from multiple sensors to be 
used together at the highest level of abstraction. 
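Pixel-level fusion in its simplest form can be sketched as a weighted average of co-registered images; the `fuse_pixel_level` function below is a deliberately minimal illustration, whereas operational systems use e.g. Laplacian-pyramid or wavelet methods such as those discussed elsewhere in this volume.

```python
def fuse_pixel_level(img_a, img_b, w_a=0.5):
    """Pixel-level fusion of two co-registered, equal-size gray-value
    images by weighted averaging: each output pixel combines the
    information of the corresponding pixels in both inputs."""
    assert len(img_a) == len(img_b) and len(img_a[0]) == len(img_b[0])
    return [[w_a * a + (1.0 - w_a) * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Two 1x2 'images' from different (hypothetical) sensors
fused = fuse_pixel_level([[10, 20]], [[30, 40]])  # → [[20.0, 30.0]]
```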
As in object recognition, identity fusion begins with the 
preprocessing of the raw sensory data, followed by feature 
extraction. Having extracted the features or feature vectors, 
identity declaration is performed by statistical pattern 
recognition techniques, or geometric models. The identity 
declarations must be partitioned into groups that represent 
observations belonging to the same observed entity. This 
partitioning - known as association - is analogous to the 
process of matching data models with object models in model 
based object recognition. Finally, identity fusion algorithms, 
such as feature-based inference techniques, cognitive-based 
models, or physical modeling are used to obtain a joint 
declaration of identity. Alternatively, fusion can occur at the 
raw data level or at the feature level. Examples for the 
different fusion types include pixel labeling from raw data 
vectors (fusion at data or pixel level), segmenting surfaces 
from fused edges extracted from aerial imagery and combined 
with laser measurements (feature level fusion), and 
recognizing buildings by using ‘building candidate’ objects 
from different sensory data (decision level fusion). 
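Decision-level fusion of such 'building candidate' declarations can be sketched as an unweighted majority vote; the `fuse_decisions` function and the sensor labels are illustrative assumptions, whereas the identity fusion algorithms named above (feature-based inference, cognitive-based models, physical modeling) weight the evidence far more carefully.

```python
from collections import Counter

def fuse_decisions(declarations):
    """Decision-level (symbol-level) fusion: combine per-sensor
    identity declarations for one observed entity by majority vote.
    Ties fall to the label declared first."""
    label, _ = Counter(declarations.values()).most_common(1)[0]
    return label

# Hypothetical object-candidate declarations from three sensors
fused = fuse_decisions({"optical": "building",
                        "lidar": "building",
                        "sar": "vegetation"})  # → "building"
```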
Pixel level fusion is only recommended for images with 
similar exterior orientation, similar spatial, spectral and 
temporal resolution, and that capture the same or similar 
physical phenomena. Often, these requirements are not 
satisfied. Such is the case when images record information 
from very different regions of the EM spectrum (e.g., visible 
and thermal), or if they were collected from different 
platforms, or else have significantly different sensor geometry 
and associated error models. In these instances, preference 
should be given to the individual segmentation of images, 
with feature or decision level fusion. Yet another 
consideration for fusion is related to the physical phenomena 
in object space. Depending on the level of grouping, extracted 
features convey information that can be related to physical 
phenomena in the object space. Obviously, features extracted 
from different sensors should be fused when they have been 
caused by the same physical property. Generally, the further 
apart the spectral bands are, the less likely it is that features 
extracted from them were caused by the same physical phenomena. On 
the other hand, as the level of abstraction increases, more and

Citation recommendation

Baltsavias, Emmanuel P. Fusion of Sensor Data, Knowledge Sources and Algorithms for Extraction and Classification of Topographic Objects. RICS Books, 1999.