
In: Baltsavias, Emmanuel P. (ed.): Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects. International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, Valladolid, Spain, 3-4 June, 1999
W. Schneider, J. Steinwendner
Institute of Surveying, Remote Sensing and Land Information (IVFL),
Universität für Bodenkultur (BOKU, University of Agricultural Sciences) Vienna,
Peter-Jordan-Str. 82, A-1190 Vienna, Austria, {schneiwe,joachim}@mail.boku.ac.at
KEYWORDS: Landcover Mapping, Optical Satellite Images, Computer Vision, Segmentation, Classification.
Landcover maps produced from satellite images by classical pixelwise statistical classification are less than satisfactory in most cases.
One of the reasons for this is that shape information and expert knowledge on the spatial arrangements of the individual landcover
patches are neglected. In an effort to simulate the working method of a human interpreter, image segmentation may be employed in
addition to classification. The purpose of image segmentation in general is the delineation of image objects (groups of pixels) with a
meaning in the real world. The purpose of image segmentation in landcover mapping is to obtain segments representing patches of
distinct landcover, such as agricultural fields, forest stands, rivers, lakes, etc. The problem is the interdependence of segmentation
and classification: Classification results are needed as input for a meaningful segmentation, and, vice versa, the segmentation results
are required for a good classification (e.g. using texture and shape parameters).
After a short overview of segmentation methods, this contribution concentrates on segment growing methods for segmentation.
Starting from a seed pixel, a segment is grown by adding neighbouring pixels as long as certain homogeneity criteria are fulfilled.
The strategy for combined segmentation and classification for landcover mapping is based on: (i) the proper choice of seeds
according to pixelwise classification, preventing e.g. the selection of mixed pixels as seed (which might lead to the formation of
meaningless segments), (ii) land-cover-specific homogeneity criteria, causing segments to grow right to the boundaries of landcover
patches, (iii) spatial subpixel analysis methods, reducing the influence of mixed pixels, and (iv) use of shape parameters of segments
for classification refinement. The method is illustrated with examples of landcover mapping from Landsat TM images.
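The seed-based segment growing outlined above can be sketched as follows. This is a minimal illustrative implementation, not the authors' exact algorithm: the homogeneity criterion here is simply the spectral distance of a candidate pixel to the running segment mean in a single band, whereas the paper proposes land-cover-specific criteria.

```python
# Hypothetical sketch of seeded segment growing: starting from a seed
# pixel, neighbouring pixels are added as long as a simple homogeneity
# criterion (distance to the running segment mean) is fulfilled.
import numpy as np
from collections import deque

def grow_segment(image, seed, threshold):
    """Grow one segment from `seed` in a single-band `image`."""
    h, w = image.shape
    segment = np.zeros((h, w), dtype=bool)
    segment[seed] = True
    mean = float(image[seed])   # running segment mean
    count = 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-neighbourhood
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not segment[nr, nc]:
                # Homogeneity criterion: spectral distance to segment mean.
                if abs(image[nr, nc] - mean) <= threshold:
                    segment[nr, nc] = True
                    count += 1
                    mean += (image[nr, nc] - mean) / count  # update mean
                    queue.append((nr, nc))
    return segment

# A homogeneous 3x3 patch embedded in a contrasting background:
img = np.array([[10, 10, 10, 90],
                [10, 11, 10, 90],
                [10, 10, 12, 90],
                [90, 90, 90, 90]], dtype=float)
seg = grow_segment(img, seed=(1, 1), threshold=5.0)  # covers the 3x3 patch
```

Choosing the seed (1, 1) inside the homogeneous patch, as the strategy in point (i) demands, lets the segment grow right to the patch boundary; a mixed or boundary pixel as seed would distort the running mean and the resulting segment.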
1.1. Scope of this contribution
This contribution deals with methodical problems of automated
mapping of landcover from satellite images. The information
needed for this is essentially of a biophysical nature and can
thus be derived to a large extent from the remotely sensed
images. In contrast to this, landuse mapping needs additional
information on functional, socio-economic and cultural aspects,
which often have to be taken from other sources (GIS). Landuse
aspects are not considered here, although landcover maps may
be used to derive landuse maps at a later stage. The restriction
to optical satellite images implies that geometrical aspects of
landcover identification, in particular 3D-effects, are neglected.
The general discussion of the problem presented here can be
adapted to various special applications, e.g. forest mapping.
1.2. Pixel-based versus segment-based classification
Automated thematic mapping from remotely sensed images
conventionally is performed by pixelwise statistical classification.
The main drawback of pixelwise classification is the fact
that it neglects shape and context aspects of the image
information, which are among the main clues for a human
interpreter. In contrast to pixel-by-pixel techniques, image
understanding (computer vision, knowledge-based) methods try
to simulate human visual interpretation (Haralick and Shapiro,
1992; Gonzalez and Woods, 1993). These techniques are based
on the conceptual analysis model shown in Figure 1.
The model follows the general approach of analytical science
and technology of breaking complex reality down into individual
objects, identifying these objects, determining their attributes
and establishing relationships between the objects.
Starting from a digital image, "objects" are delimited in the
segmentation process. These "image objects" can conceptually
be areas, lines, or points. In actuality, the image objects are sets
of adjacent pixels having a meaning in the scene (i.e. the
section of the real world shown in the image).
The objects of an image are, in a second process, classified, i.e.
each object is assigned to one category out of a set of
predefined categories, on the basis of the objects' attributes and
relationships to other objects. This classification can be seen as
a process of "matching" (establishing correspondences) with
prototypes (defining the categories) stored in a knowledge base.
Classified image objects are termed “scene objects”.
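The "matching" of image objects against prototypes can be sketched as below. The category names, attribute names and prototype values are illustrative assumptions, not taken from the paper; a simple nearest-prototype rule in attribute space stands in for whatever matching procedure the knowledge base actually employs.

```python
# Hypothetical sketch of object classification as "matching" against
# category prototypes stored in a knowledge base. All attribute values
# here are invented for illustration.
import math

# Prototype attribute vectors per category (knowledge base):
prototypes = {
    "water":  {"mean_reflectance": 0.05, "compactness": 0.4},
    "forest": {"mean_reflectance": 0.20, "compactness": 0.6},
    "field":  {"mean_reflectance": 0.45, "compactness": 0.9},
}

def classify_object(attributes, prototypes):
    """Assign the category whose prototype is nearest in attribute space."""
    def distance(proto):
        return math.sqrt(sum((attributes[k] - proto[k]) ** 2 for k in proto))
    return min(prototypes, key=lambda cat: distance(prototypes[cat]))

# An image object with measured attributes, turned into a scene object:
obj = {"mean_reflectance": 0.18, "compactness": 0.65}
label = classify_object(obj, prototypes)  # nearest prototype: "forest"
```

In a fuller system the attribute set would include the shape parameters mentioned in the abstract (point iv), and the matching could also take relationships to neighbouring objects into account.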
From a formal point of view, pixelwise classification may also
be subsumed under this conceptual model. In this case, the
segmentation process is left out, and the image objects to be
classified are the individual pixels.
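This degenerate case can be made concrete with a small sketch. A minimum-distance-to-mean rule serves here as a simple stand-in for the statistical classifiers commonly used (e.g. maximum likelihood); the class means and pixel values are invented for illustration. Note that each pixel is decided independently, which is exactly the neglect of shape and context criticised above.

```python
# Minimal sketch of pixelwise statistical classification: every pixel is
# an "image object" of its own and is assigned to the class with the
# nearest mean vector. Class means are illustrative assumptions.
import numpy as np

class_means = np.array([[20.0, 80.0],    # class 0, e.g. water  (band 1, band 2)
                        [60.0, 40.0]])   # class 1, e.g. bare soil

def classify_pixelwise(image):
    """image: (rows, cols, bands) -> (rows, cols) array of class indices."""
    # Spectral distance of every pixel to every class mean:
    diffs = image[:, :, None, :] - class_means[None, None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Each pixel is classified independently of its neighbours:
    return np.argmin(dists, axis=-1)

img = np.array([[[22.0, 78.0], [58.0, 42.0]],
                [[19.0, 81.0], [61.0, 39.0]]])
labels = classify_pixelwise(img)  # [[0, 1], [0, 1]]
```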