
Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects

Access restriction

There is no access restriction for this record.

Copyright

CC BY: Attribution 4.0 International.

Bibliographic data


Monograph

Persistent identifier:
856473650
Author:
Baltsavias, Emmanuel P.
Title:
Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects
Sub title:
Joint ISPRS/EARSeL Workshop ; 3 - 4 June 1999, Valladolid, Spain
Scope:
III, 209 pages
Year of publication:
1999
Place of publication:
Coventry
Publisher of the original:
RICS Books
Identifier (digital):
856473650
Illustration:
Illustrations, diagrams, maps
Language:
English
Usage licence:
Attribution 4.0 International (CC BY 4.0)
Publisher of the digital copy:
Technische Informationsbibliothek Hannover
Place of publication of the digital copy:
Hannover
Year of publication of the digital copy:
2016
Document type:
Monograph
Collection:
Earth sciences

Chapter

Title:
TECHNICAL SESSION 3 OBJECT AND IMAGE CLASSIFICATION
Document type:
Monograph
Structure type:
Chapter

Chapter

Title:
LANDCOVER MAPPING BY INTERRELATED SEGMENTATION AND CLASSIFICATION OF SATELLITE IMAGES. W. Schneider, J. Steinwendner
Document type:
Monograph
Structure type:
Chapter

Contents

Table of contents

  • Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects
  • Cover
  • ColorChart
  • Title page
  • CONTENTS
  • PREFACE
  • TECHNICAL SESSION 1 OVERVIEW OF IMAGE / DATA / INFORMATION FUSION AND INTEGRATION
  • DEFINITIONS AND TERMS OF REFERENCE IN DATA FUSION. L. Wald
  • TOOLS AND METHODS FOR FUSION OF IMAGES OF DIFFERENT SPATIAL RESOLUTION. C. Pohl
  • INTEGRATION OF IMAGE ANALYSIS AND GIS. Emmanuel Baltsavias, Michael Hahn,
  • TECHNICAL SESSION 2 PREREQUISITES FOR FUSION / INTEGRATION: IMAGE TO IMAGE / MAP REGISTRATION
  • GEOCODING AND COREGISTRATION OF MULTISENSOR AND MULTITEMPORAL REMOTE SENSING IMAGES. Hannes Raggam, Mathias Schardt and Heinz Gallaun
  • GEORIS: A TOOL TO OVERLAY PRECISELY DIGITAL IMAGERY. Ph. Garnesson, D. Bruckert
  • AUTOMATED PROCEDURES FOR MULTISENSOR REGISTRATION AND ORTHORECTIFICATION OF SATELLITE IMAGES. Ian Dowman and Paul Dare
  • TECHNICAL SESSION 3 OBJECT AND IMAGE CLASSIFICATION
  • LANDCOVER MAPPING BY INTERRELATED SEGMENTATION AND CLASSIFICATION OF SATELLITE IMAGES. W. Schneider, J. Steinwendner
  • INCLUSION OF MULTISPECTRAL DATA INTO OBJECT RECOGNITION. Bea Csathó, Toni Schenk, Dong-Cheon Lee and Sagi Filin
  • SCALE CHARACTERISTICS OF LOCAL AUTOCOVARIANCES FOR TEXTURE SEGMENTATION. Annett Faber, Wolfgang Förstner
  • BAYESIAN METHODS: APPLICATIONS IN INFORMATION AGGREGATION AND IMAGE DATA MINING. Mihai Datcu and Klaus Seidel
  • TECHNICAL SESSION 4 FUSION OF SENSOR-DERIVED PRODUCTS
  • AUTOMATIC CLASSIFICATION OF URBAN ENVIRONMENTS FOR DATABASE REVISION USING LIDAR AND COLOR AERIAL IMAGERY. N. Haala, V. Walter
  • STRATEGIES AND METHODS FOR THE FUSION OF DIGITAL ELEVATION MODELS FROM OPTICAL AND SAR DATA. M. Honikel
  • INTEGRATION OF DTMS USING WAVELETS. M. Hahn, F. Samadzadegan
  • ANISOTROPY INFORMATION FROM MOMS-02/PRIRODA STEREO DATASETS - AN ADDITIONAL PHYSICAL PARAMETER FOR LAND SURFACE CHARACTERISATION. Th. Schneider, I. Manakos, Peter Reinartz, R. Müller
  • TECHNICAL SESSION 5 FUSION OF VARIABLE SPATIAL / SPECTRAL RESOLUTION IMAGES
  • ADAPTIVE FUSION OF MULTISOURCE RASTER DATA APPLYING FILTER TECHNIQUES. K. Steinnocher
  • FUSION OF 18 m MOMS-2P AND 30 m LANDSAT TM MULTISPECTRAL DATA BY THE GENERALIZED LAPLACIAN PYRAMID. Bruno Aiazzi, Luciano Alparone, Stefano Baronti, Ivan Pippi
  • OPERATIONAL APPLICATIONS OF MULTI-SENSOR IMAGE FUSION. C. Pohl, H. Touron
  • TECHNICAL SESSION 6 INTEGRATION OF IMAGE ANALYSIS AND GIS
  • KNOWLEDGE BASED INTERPRETATION OF MULTISENSOR AND MULTITEMPORAL REMOTE SENSING IMAGES. Stefan Growe
  • AUTOMATIC RECONSTRUCTION OF ROOFS FROM MAPS AND ELEVATION DATA. U. Stilla, K. Jurkiewicz
  • INVESTIGATION OF SYNERGY EFFECTS BETWEEN SATELLITE IMAGERY AND DIGITAL TOPOGRAPHIC DATABASES BY USING INTEGRATED KNOWLEDGE PROCESSING. Dietmar Kunz
  • INTERACTIVE SESSION 1 IMAGE CLASSIFICATION
  • AN AUTOMATED APPROACH FOR TRAINING DATA SELECTION WITHIN AN INTEGRATED GIS AND REMOTE SENSING ENVIRONMENT FOR MONITORING TEMPORAL CHANGES. Ulrich Rhein
  • CLASSIFICATION OF SETTLEMENT STRUCTURES USING MORPHOLOGICAL AND SPECTRAL FEATURES IN FUSED HIGH RESOLUTION SATELLITE IMAGES (IRS-1C). Maik Netzband, Gotthard Meinel, Regin Lippold
  • ASSESSMENT OF NOISE VARIANCE AND INFORMATION CONTENT OF MULTI-/HYPER-SPECTRAL IMAGERY. Bruno Aiazzi, Luciano Alparone, Alessandro Barducci, Stefano Baronti, Ivan Pippi
  • COMBINING SPECTRAL AND TEXTURAL FEATURES FOR MULTISPECTRAL IMAGE CLASSIFICATION WITH ARTIFICIAL NEURAL NETWORKS. H. He, C. Collet
  • TECHNICAL SESSION 7 APPLICATIONS IN FORESTRY
  • SENSOR FUSED IMAGES FOR VISUAL INTERPRETATION OF FOREST STAND BORDERS. R. Fritz, I. Freeh, B. Koch, Chr. Ueffing
  • A LOCAL CORRELATION APPROACH FOR THE FUSION OF REMOTE SENSING DATA WITH DIFFERENT SPATIAL RESOLUTIONS IN FORESTRY APPLICATIONS. J. Hill, C. Diemer, O. Stöver, Th. Udelhoven
  • OBJECT-BASED CLASSIFICATION AND APPLICATIONS IN THE ALPINE FOREST ENVIRONMENT. R. de Kok, T. Schneider, U. Ammer
  • Author Index
  • Keyword Index
  • Cover

Full text

International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, Valladolid, Spain, 3-4 June, 1999 
LANDCOVER MAPPING BY INTERRELATED SEGMENTATION AND CLASSIFICATION 
OF SATELLITE IMAGES 
W. Schneider, J. Steinwendner 
Institute of Surveying, Remote Sensing and Land Information (IVFL), 
Universität für Bodenkultur (BOKU, University of Agricultural Sciences) Vienna, 
Peter-Jordan-Str. 82, A-1190 Vienna, Austria, {schneiwe,joachim}@mail.boku.ac.at 
KEYWORDS: Landcover Mapping, Optical Satellite Images, Computer Vision, Segmentation, Classification. 
ABSTRACT 
Landcover maps produced from satellite images by classical pixelwise statistical classification are less than satisfactory in most cases. 
One of the reasons for this is that shape information and expert knowledge on the spatial arrangements of the individual landcover 
patches are neglected. In an effort to simulate the working method of a human interpreter, image segmentation may be employed in 
addition to classification. The purpose of image segmentation in general is the delineation of image objects (groups of pixels) with a 
meaning in the real world. The purpose of image segmentation in landcover mapping is to obtain segments representing patches of 
distinct landcover, such as agricultural fields, forest stands, rivers, lakes, etc. The problem is the interdependence of segmentation 
and classification: Classification results are needed as input for a meaningful segmentation, and, vice versa, the segmentation results 
are required for a good classification (e.g. using texture and shape parameters). 
After a short overview of segmentation methods, this contribution concentrates on segment growing methods for segmentation. 
Starting from a seed pixel, a segment is grown by adding neighbouring pixels as long as certain homogeneity criteria are fulfilled. 
The strategy for combined segmentation and classification for landcover mapping is based on: (i) the proper choice of seeds 
according to pixelwise classification, preventing e.g. the selection of mixed pixels as seed (which might lead to the formation of 
meaningless segments), (ii) land-cover-specific homogeneity criteria, causing segments to grow right to the boundaries of landcover 
patches, (iii) spatial subpixel analysis methods, reducing the influence of mixed pixels, and (iv) use of shape parameters of segments 
for classification refinement. The method is illustrated with examples of landcover mapping from Landsat TM images. 
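
To make the segment-growing strategy concrete, the following is a minimal sketch of seeded region growing with a simple homogeneity test, in Python. It is not the authors' implementation: the function name grow_segment, the 4-neighbourhood, and the criterion "spectral distance to the running segment mean below spectral_tol" are illustrative assumptions, and the choice of the seed from the pixelwise classification (step i) is assumed to have been made beforehand.

```python
import numpy as np
from collections import deque

def grow_segment(image, class_map, seed, target_class, spectral_tol):
    """Grow one segment from a seed pixel (breadth-first search), adding
    4-neighbours as long as they carry the seed's pixelwise class label and
    stay spectrally close to the running segment mean.
    image: (rows, cols, bands) array; class_map: (rows, cols) pixelwise labels;
    seed: (row, col) tuple chosen from unambiguously classified pixels."""
    rows, cols, _ = image.shape
    in_segment = np.zeros((rows, cols), dtype=bool)
    in_segment[seed] = True
    mean = image[seed].astype(float)      # running mean spectrum of the segment
    count = 1
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not in_segment[nr, nc]:
                # homogeneity criterion: same pixelwise class as the seed and
                # spectral distance to the current segment mean below the threshold
                if (class_map[nr, nc] == target_class and
                        np.linalg.norm(image[nr, nc] - mean) < spectral_tol):
                    in_segment[nr, nc] = True
                    mean = (mean * count + image[nr, nc]) / (count + 1)
                    count += 1
                    frontier.append((nr, nc))
    return in_segment
```

In a full system along the lines of points (i)-(iv), seeds would be restricted to pixels classified unambiguously (excluding mixed pixels), spectral_tol would be chosen per landcover class, mixed boundary pixels would be treated by spatial subpixel analysis, and the shape of the finished segments would feed back into the classification.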
1. PROBLEM DEFINITION 
1.1. Scope of this contribution 
This contribution deals with methodical problems of automated 
mapping of landcover from satellite images. The information 
needed for this is essentially of a biophysical nature and thus 
can be derived to a large extent from the remotely sensed 
images. In contrast to this, landuse mapping needs additional 
information on functional, socio-economic and cultural aspects, 
which often have to be taken from other sources (GIS). Landuse 
aspects are not considered here, although landcover maps may 
be used to derive landuse maps at a later stage. The restriction 
to optical satellite images implies that geometrical aspects of 
landcover identification, in particular 3D-effects, are neglected. 
The general discussion of the problem presented here can be 
adapted to various special applications, e.g. forest mapping. 
1.2. Pixel-based versus segment-based classification 
Automated thematic mapping from remotely sensed images 
is conventionally performed by pixelwise statistical classification. 
The main drawback of pixelwise classification is the fact 
that it neglects shape and context aspects of the image 
information, which are among the main clues for a human 
interpreter. In contrast to pixel-by-pixel techniques, image 
understanding (computer vision, knowledge-based) methods try 
to simulate human visual interpretation (Haralick and Shapiro, 
1992, Gonzales and Woods, 1993). These techniques are based 
on the conceptual analysis model shown in Figure 1. 
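
As a point of reference for what these image understanding methods improve upon, here is a minimal sketch of the pixel-by-pixel baseline: a minimum-distance classifier that labels each pixel independently of its neighbours. The function name and the minimum-distance decision rule are simplifying assumptions; the statistical classifiers meant in the text (e.g. maximum likelihood) use a different decision rule but share the per-pixel nature that discards shape and context.

```python
import numpy as np

def pixelwise_classify(image, class_means):
    """Label every pixel with the class whose mean spectrum is nearest
    (a minimum-distance stand-in for pixelwise statistical classification).
    image: (rows, cols, bands); class_means: (n_classes, bands)."""
    class_means = np.asarray(class_means, dtype=float)
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(float)
    # squared Euclidean distance of every pixel to every class mean;
    # each pixel is treated in isolation, so no shape or context enters
    dists = ((pixels[:, None, :] - class_means[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1).reshape(rows, cols)
```

The conceptual model of Figure 1, described next, inserts a segmentation step in front of the classification, so that groups of pixels rather than isolated pixels become the units to be labelled.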
The model follows the general approach of analytical science 
and technology of breaking complex reality down into individual 
objects, identifying these objects, determining their attributes 
and establishing relationships between the objects. 
Starting from a digital image, "objects" are delimited in the 
segmentation process. These "image objects" can conceptually 
be areas, lines, or points. In actuality, the image objects are sets 
of adjacent pixels having a meaning in the scene (i.e. the 
section of the real world shown in the image). 
The objects of an image are, in a second process, classified, i.e. 
each object is assigned to one category out of a set of predefined 
categories, on the basis of the objects’ attributes and 
relationships to other objects. This classification can be seen as 
a process of "matching" (establishing correspondences) with 
prototypes (defining the categories) stored in a knowledge base. 
Classified image objects are termed “scene objects”. 
From a formal point of view, pixelwise classification may also 
be subsumed under this conceptual model. In this case, the 
segmentation process is left out, and the image objects to be 
classified are the individual pixels.
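
The classification step of this conceptual model, i.e. matching image objects against prototypes stored in a knowledge base, might be sketched as follows. The attribute set (mean spectrum, area, compactness), the weighted-distance matching rule and all names are illustrative assumptions, not the authors' knowledge-base design.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ImageObject:
    mean_spectrum: np.ndarray   # mean reflectance over the segment's pixels
    area: float                 # segment size (pixels or map units)
    compactness: float          # shape attribute, e.g. 4*pi*area / perimeter**2

@dataclass
class Prototype:
    label: str                  # landcover category, e.g. "forest stand", "lake"
    mean_spectrum: np.ndarray
    area: float
    compactness: float

def classify_object(obj, prototypes, w_spec=1.0, w_area=0.001, w_shape=1.0):
    """Assign an image object to the category of the best-matching prototype,
    using a weighted distance over spectral and shape attributes."""
    def mismatch(p):
        return (w_spec * np.linalg.norm(obj.mean_spectrum - p.mean_spectrum)
                + w_area * abs(obj.area - p.area)
                + w_shape * abs(obj.compactness - p.compactness))
    return min(prototypes, key=mismatch).label
```

A classified image object then corresponds to a "scene object" in the sense used above; applying the same matching to single pixels, with only spectral attributes available, reproduces pixelwise classification as the degenerate case described in the last paragraph.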