Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects

Access restriction

There is no access restriction for this record.

Copyright

CC BY: Attribution 4.0 International.

Bibliographic data

Monograph

Persistent identifier:
856473650
Author:
Baltsavias, Emmanuel P.
Title:
Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects
Sub title:
Joint ISPRS/EARSeL Workshop ; 3 - 4 June 1999, Valladolid, Spain
Scope:
III, 209 pages
Year of publication:
1999
Place of publication:
Coventry
Publisher of the original:
RICS Books
Identifier (digital):
856473650
Illustration:
Illustrations, diagrams, maps
Language:
English
Usage licence:
Attribution 4.0 International (CC BY 4.0)
Publisher of the digital copy:
Technische Informationsbibliothek Hannover
Place of publication of the digital copy:
Hannover
Year of publication of the digital copy:
2016
Document type:
Monograph
Collection:
Earth sciences

Chapter

Title:
TECHNICAL SESSION 3 OBJECT AND IMAGE CLASSIFICATION
Document type:
Monograph
Structure type:
Chapter

Chapter

Title:
INCLUSION OF MULTISPECTRAL DATA INTO OBJECT RECOGNITION. Bea Csathó , Toni Schenk, Dong-Cheon Lee and Sagi Filin
Document type:
Monograph
Structure type:
Chapter

Contents

Table of contents

  • Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects
  • Cover
  • ColorChart
  • Title page
  • CONTENTS
  • PREFACE
  • TECHNICAL SESSION 1 OVERVIEW OF IMAGE / DATA / INFORMATION FUSION AND INTEGRATION
  • DEFINITIONS AND TERMS OF REFERENCE IN DATA FUSION. L. Wald
  • TOOLS AND METHODS FOR FUSION OF IMAGES OF DIFFERENT SPATIAL RESOLUTION. C. Pohl
  • INTEGRATION OF IMAGE ANALYSIS AND GIS. Emmanuel Baltsavias, Michael Hahn,
  • TECHNICAL SESSION 2 PREREQUISITES FOR FUSION / INTEGRATION: IMAGE TO IMAGE / MAP REGISTRATION
  • GEOCODING AND COREGISTRATION OF MULTISENSOR AND MULTITEMPORAL REMOTE SENSING IMAGES. Hannes Raggam, Mathias Schardt and Heinz Gallaun
  • GEORIS: A TOOL TO OVERLAY PRECISELY DIGITAL IMAGERY. Ph. Garnesson, D. Bruckert
  • AUTOMATED PROCEDURES FOR MULTISENSOR REGISTRATION AND ORTHORECTIFICATION OF SATELLITE IMAGES. Ian Dowman and Paul Dare
  • TECHNICAL SESSION 3 OBJECT AND IMAGE CLASSIFICATION
  • LANDCOVER MAPPING BY INTERRELATED SEGMENTATION AND CLASSIFICATION OF SATELLITE IMAGES. W. Schneider, J. Steinwendner
  • INCLUSION OF MULTISPECTRAL DATA INTO OBJECT RECOGNITION. Bea Csathó , Toni Schenk, Dong-Cheon Lee and Sagi Filin
  • SCALE CHARACTERISTICS OF LOCAL AUTOCOVARIANCES FOR TEXTURE SEGMENTATION. Annett Faber, Wolfgang Förstner
  • BAYESIAN METHODS: APPLICATIONS IN INFORMATION AGGREGATION AND IMAGE DATA MINING. Mihai Datcu and Klaus Seidel
  • TECHNICAL SESSION 4 FUSION OF SENSOR-DERIVED PRODUCTS
  • AUTOMATIC CLASSIFICATION OF URBAN ENVIRONMENTS FOR DATABASE REVISION USING LIDAR AND COLOR AERIAL IMAGERY. N. Haala, V. Walter
  • STRATEGIES AND METHODS FOR THE FUSION OF DIGITAL ELEVATION MODELS FROM OPTICAL AND SAR DATA. M. Honikel
  • INTEGRATION OF DTMS USING WAVELETS. M. Hahn, F. Samadzadegan
  • ANISOTROPY INFORMATION FROM MOMS-02/PRIRODA STEREO DATASETS - AN ADDITIONAL PHYSICAL PARAMETER FOR LAND SURFACE CHARACTERISATION. Th. Schneider, I. Manakos, Peter Reinartz, R. Müller
  • TECHNICAL SESSION 5 FUSION OF VARIABLE SPATIAL / SPECTRAL RESOLUTION IMAGES
  • ADAPTIVE FUSION OF MULTISOURCE RASTER DATA APPLYING FILTER TECHNIQUES. K. Steinnocher
  • FUSION OF 18 m MOMS-2P AND 30 m LANDSAT TM MULTISPECTRAL DATA BY THE GENERALIZED LAPLACIAN PYRAMID. Bruno Aiazzi, Luciano Alparone, Stefano Baronti, Ivan Pippi
  • OPERATIONAL APPLICATIONS OF MULTI-SENSOR IMAGE FUSION. C. Pohl, H. Touron
  • TECHNICAL SESSION 6 INTEGRATION OF IMAGE ANALYSIS AND GIS
  • KNOWLEDGE BASED INTERPRETATION OF MULTISENSOR AND MULTITEMPORAL REMOTE SENSING IMAGES. Stefan Growe
  • AUTOMATIC RECONSTRUCTION OF ROOFS FROM MAPS AND ELEVATION DATA. U. Stilla, K. Jurkiewicz
  • INVESTIGATION OF SYNERGY EFFECTS BETWEEN SATELLITE IMAGERY AND DIGITAL TOPOGRAPHIC DATABASES BY USING INTEGRATED KNOWLEDGE PROCESSING. Dietmar Kunz
  • INTERACTIVE SESSION 1 IMAGE CLASSIFICATION
  • AN AUTOMATED APPROACH FOR TRAINING DATA SELECTION WITHIN AN INTEGRATED GIS AND REMOTE SENSING ENVIRONMENT FOR MONITORING TEMPORAL CHANGES. Ulrich Rhein
  • CLASSIFICATION OF SETTLEMENT STRUCTURES USING MORPHOLOGICAL AND SPECTRAL FEATURES IN FUSED HIGH RESOLUTION SATELLITE IMAGES (IRS-1C). Maik Netzband, Gotthard Meinel, Regin Lippold
  • ASSESSMENT OF NOISE VARIANCE AND INFORMATION CONTENT OF MULTI-/HYPER-SPECTRAL IMAGERY. Bruno Aiazzi, Luciano Alparone, Alessandro Barducci, Stefano Baronti, Ivan Pippi
  • COMBINING SPECTRAL AND TEXTURAL FEATURES FOR MULTISPECTRAL IMAGE CLASSIFICATION WITH ARTIFICIAL NEURAL NETWORKS. H. He , C. Collet
  • TECHNICAL SESSION 7 APPLICATIONS IN FORESTRY
  • SENSOR FUSED IMAGES FOR VISUAL INTERPRETATION OF FOREST STAND BORDERS. R. Fritz, I. Freeh, B. Koch, Chr. Ueffing
  • A LOCAL CORRELATION APPROACH FOR THE FUSION OF REMOTE SENSING DATA WITH DIFFERENT SPATIAL RESOLUTIONS IN FORESTRY APPLICATIONS. J. Hill, C. Diemer, O. Stöver, Th. Udelhoven
  • OBJECT-BASED CLASSIFICATION AND APPLICATIONS IN THE ALPINE FOREST ENVIRONMENT. R. de Kok, T. Schneider, U. Ammer
  • Author Index
  • Keyword Index
  • Cover

Full text

International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, Valladolid, Spain, 3-4 June, 1999 
INCLUSION OF MULTISPECTRAL DATA INTO OBJECT RECOGNITION 
Bea Csathó, Toni Schenk, Dong-Cheon Lee and Sagi Filin 
¹ Byrd Polar Research Center, OSU, 1090 Carmack Rd., Columbus, OH 43210, email: csatho.l@osu.edu, phone: 1-614-292-6641 
² Department of Civil Engineering, OSU, 2070 Neil Ave., Columbus, OH 43210, email: schenk.2@osu.edu, phone: 1-614-292-7126 
KEYWORDS: Data fusion, multisensor, classification, urban mapping, surface reconstruction. 
ABSTRACT 
In this paper, we describe how object recognition benefits from exploiting multispectral and multisensor datasets. After a brief 
introduction we summarize the most important principles of object recognition and multisensor fusion. This serves as the basis for 
the proposed architecture of a multisensor object recognition system. It is characterized by multistage fusion, where the different 
sensory input data are processed individually and only merged at appropriate levels. The remaining sections describe the major 
fusion processes. Rather than providing detailed descriptions, a few examples, obtained from the Ocean City test-data site, have been 
chosen to illustrate the processing of the major data streams. The test site comprises multispectral and aerial imagery and laser 
scanning data. 
1. INTRODUCTION 
The ultimate goal of digital photogrammetry is the automation 
of map making. This entails understanding aerial imagery and 
recognizing objects - both hard problems. Despite the 
increased research activity and the remarkable progress that 
has been achieved, systems are still far from being operational 
and the far-reaching goal of an automatic map making system 
remains a dream. 
Before an object, e.g. a building, can be measured, it must 
first be identified as such. Fully automated systems have been 
developed for recognizing certain objects, such as buildings 
and roads in monocular aerial imagery, but their 
performance largely depends on the complexity of the scene 
and other factors (Shufelt, 1999). However, the utilization of 
multiple sensory input data, or other ancillary data, such as 
DEMs or GIS layers, opens new avenues to approach the 
problem. By combining sensors that use different physical 
principles and record different properties of the object space, 
complementary and redundant information becomes available. 
If merged properly, multisensor data may lead to a more 
stable and consistent scene description. Active research topics 
in object recognition include multi-image techniques using 
3D feature extraction, DEM analysis or range images from 
laser scanning, map- or GIS-based extraction, color or 
multispectral analysis, and/or a combination of these 
techniques. 
Now the cardinal question is how to exploit the potential 
these different data sources offer to tackle object recognition 
more effectively. Ideally, proven concepts and methods in 
remote sensing, digital photogrammetry and computer vision 
should be combined in a synergistic fashion. The combination 
may be possible through the use of multisensor data fusion, or 
distributed sensing. Data fusion is concerned with the 
problem of how to combine data from multiple sensors to 
perform inferences that may not be possible from a single 
sensor alone (Hall, 1992). In this paper, we propose a unified 
framework for object recognition and multisensor data fusion. 
We start out with a brief description of the object recognition 
paradigm, followed by the discussion of different 
architectures for data fusion. We then propose a multisensor 
object recognition system. The remaining sections describe 
the major fusion processes. Rather than providing detailed 
descriptions, a few examples, obtained from the Ocean City 
test-data site, have been chosen to illustrate the processing of 
the major data streams. Csatho and Schenk (1998) reported 
on earlier tests using the same dataset. The paper ends with 
conclusions and an outline of future research. 
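To make Hall's definition concrete, the following sketch fuses per-sensor class likelihoods with a naive Bayes product. The sensor names, classes and likelihood values are invented for illustration and are not from the paper; the point is only that neither sensor alone separates all three classes, while the fused posterior does.

```python
# Hypothetical sketch: fusing per-sensor class likelihoods for one pixel.
# Sensors, classes and numbers are invented examples, not the paper's data.

def fuse_naive_bayes(likelihoods):
    """Multiply independent per-sensor class likelihoods, then normalise
    to a posterior over classes (a uniform prior is assumed)."""
    classes = likelihoods[0].keys()
    scores = {c: 1.0 for c in classes}
    for sensor in likelihoods:
        for c in classes:
            scores[c] *= sensor[c]
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

# Multispectral data alone cannot separate "building" from "road"
# (similar reflectance); laser heights alone confuse "building" and "tree".
multispectral = {"building": 0.4, "road": 0.4, "tree": 0.2}
laser_height  = {"building": 0.45, "road": 0.1, "tree": 0.45}

posterior = fuse_naive_bayes([multispectral, laser_height])
best = max(posterior, key=posterior.get)  # "building" wins only after fusion
```

This mirrors the complementary/redundant-information argument above: each sensor leaves an ambiguity that the other resolves.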
2. BACKGROUND 
2.1. Object recognition paradigm 
At the heart of the paradigm is the recognition that it is 
impossible to directly bridge the gap between sensory input data and 
the desired output. Consider a gray level image as input and a 
GIS as the result of object recognition. The computer does not 
see an object, e.g., a building. All it has available at the outset 
is an array of numbers. On the output side, however, we have 
an abstract description of the object, for example, the 
coordinates of its boundary. There is no direct mapping 
between the two sets of numbers. 
A commonly used paradigm begins with preprocessing the 
raw sensory input data, followed by feature extraction and 
segmentation. Features and regions are perceptually organized 
until an object, or parts of an object, emerge from the data. 
This data model is then compared with a model of the 
physical object. If there is sufficient agreement, the data 
model is labeled accordingly. In a first step, the sensor data 
usually require some pre-processing. For example, images 
may be radiometrically adjusted, oriented and perhaps 
normalized. Similarly, raw laser altimeter data are processed 
to 3-D points in object space. 
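The stages just described can be sketched as a minimal pipeline. All function bodies below are placeholders invented for illustration; only the preprocess / extract / organize / match sequence comes from the text.

```python
# Minimal sketch of the recognition paradigm described above.
# Stage contents and thresholds are assumptions, not the authors' method.

def preprocess(raw):
    """Radiometric adjustment, orientation etc. would happen here."""
    return raw

def extract_features(data):
    """E.g. edges or planar patches; here simply non-background cells."""
    return [(r, c) for r, row in enumerate(data)
            for c, v in enumerate(row) if v > 0]

def organize(features):
    """Perceptual grouping of features into a candidate data model."""
    return {"boundary": sorted(features)}

def matches(data_model, object_model):
    """Compare the data model against a stored physical object model."""
    return len(data_model["boundary"]) >= object_model["min_boundary_points"]

def recognize(raw, object_model):
    data_model = organize(extract_features(preprocess(raw)))
    return "building" if matches(data_model, object_model) else "unknown"

label = recognize([[0, 1], [1, 1]], {"min_boundary_points": 3})
```

The labeling step at the end is where the "sufficient agreement" test of the paradigm lives; everything before it only builds the data model.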
The motivation for feature extraction is to capture information 
from the processed sensory data that is somehow related to 
the objects to be recognized. Edges are a typical example.
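As a toy illustration of edge extraction as a feature-extraction step, the sketch below marks large horizontal intensity differences in a small synthetic gray-level array. The image and threshold are invented examples.

```python
# Toy edge detector: horizontal gradient magnitude on a synthetic
# gray-level array containing a vertical step edge.

image = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]

def horizontal_edges(img, threshold=50):
    """Mark pixels where the left/right intensity difference is large."""
    edges = []
    for r, row in enumerate(img):
        for c in range(1, len(row) - 1):
            grad = abs(row[c + 1] - row[c - 1])
            if grad > threshold:
                edges.append((r, c))
    return edges

edge_pixels = horizontal_edges(image)
# The step between columns 1 and 2 yields responses in those two columns.
```

Real systems would of course use robust operators (Sobel, Canny) and link the responses into boundary segments, but the principle is the same: the extracted features relate to object boundaries, not to the raw gray values.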