Title: Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects
Author: Baltsavias, Emmanuel P.

International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, Valladolid, Spain, 3-4 June 1999
Therefore, a model-driven top-down approach can be integrated into the common data-driven bottom-up process of satellite image analysis. Figure 1 shows the flowchart of the analysis process. Common satellite image analysis techniques are restricted to pixel-based classification, rely on a single feature (the spectral signature) and require the interactive selection of training areas. The new approach avoids these restrictions.
Image and topographic databases describe the scene from their respective perspectives. Using the a priori semantic information of the topographic objects in the map, training areas are selected automatically by overlaying the image with the topographic objects. The geometric errors between the image and map objects are eliminated through the large number of training areas and the use of histogram analysis. Learning typical features for the object classes is necessary for the later classification step.
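As an illustration only, the overlay could be realized by rasterizing the DLM polygons onto the image grid and collecting the covered pixels per class; the polygon, class codes, raster geometry and file name below are hypothetical, and the use of rasterio is an assumption, not the authors' implementation.

```python
from rasterio import features
from rasterio.transform import from_origin

# Hypothetical geometry of the geocoded satellite image:
# 1000 x 1000 pixels, 30 m ground resolution, upper-left corner at (350000, 5600000).
transform = from_origin(350000, 5600000, 30, 30)
out_shape = (1000, 1000)

# Hypothetical DLM objects as (GeoJSON-like geometry, class code) pairs,
# e.g. 1 = 'water', 2 = 'forest', 3 = 'settlement'.
dlm_objects = [
    ({"type": "Polygon",
      "coordinates": [[(352000, 5598000), (355000, 5598000),
                       (355000, 5595000), (352000, 5595000),
                       (352000, 5598000)]]},
     2),
]

# Burn all DLM objects into a label raster aligned with the image; every pixel
# covered by a class polygon becomes a candidate training pixel for that class.
training_mask = features.rasterize(dlm_objects, out_shape=out_shape,
                                   transform=transform, fill=0, dtype="uint8")

# Training pixels of class 2 ('forest') in one image band (hypothetical file):
# band = rasterio.open("scene.tif").read(1)
# forest_pixels = band[training_mask == 2]
```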
In the following step, the semantic modelling of the topographic objects, both symbolic scene descriptions are linked and an unambiguous scene description with disjoint objects is built. Knowledge-based techniques are then applied to classify the resulting geometrically disjoint objects. The result of the classification process is a complete semantic scene description.
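A minimal sketch of how such geometrically disjoint objects could be derived from two overlapping descriptions, assuming both are available as polygons in the same map coordinate system; the footprints and the use of shapely are illustrative assumptions:

```python
from shapely.geometry import Polygon

# Hypothetical footprints: a landuse object from the topographic database and
# an image-derived segment, both in the same map coordinates.
map_object = Polygon([(0, 0), (100, 0), (100, 100), (0, 100)])
image_segment = Polygon([(60, 60), (160, 60), (160, 160), (60, 160)])

# Split the two descriptions into disjoint parts: the area supported by both
# sources and the areas covered by only one of them.
common = map_object.intersection(image_segment)
map_only = map_object.difference(image_segment)
image_only = image_segment.difference(map_object)

for name, geom in [("common", common), ("map only", map_only),
                   ("image only", image_only)]:
    print(name, round(geom.area, 1))
```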
This paper does not deal with the last step of updating the
digital database, which involves its comparison with the
semantic scene description.
2. KNOWLEDGE BASED FEATURE EXTRACTION AND SEGMENTATION
The common features in satellite image analysis, i.e. the spectral signatures (mean values, standard deviations), have proven to be insufficient for high-quality results (Bähr and Vögtle, 1991; Vögtle and Schilling, 1995). These features therefore have to be extended by spectral as well as non-spectral parameters that contribute to an improved distinction between the defined object classes. Thus, geometric and structural features are also taken into account (Table 1):
Spectral features:     Spectral Signature, Texture
Non-spectral features: Structure, Size, Shape/Contour, Neighbourhood Relations

Table 1. Selected features for image analysis.
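The paper does not spell out how the non-spectral features are computed; as a sketch under that caveat, size and shape/contour could be derived per labelled segment roughly as follows (the label image and the compactness measure are assumptions):

```python
import math
import numpy as np
from skimage import measure

# Hypothetical label image: every segment carries an integer label > 0.
labels = np.zeros((200, 200), dtype=int)
labels[20:80, 20:180] = 1     # an elongated segment
labels[120:180, 120:180] = 2  # a compact, square segment

for region in measure.regionprops(labels):
    size = region.area                # size in pixels
    perimeter = region.perimeter      # boundary length in pixel units
    # Compactness as one possible shape/contour descriptor:
    # close to 1 for a circle, smaller for elongated or ragged outlines.
    compactness = 4 * math.pi * size / perimeter ** 2 if perimeter > 0 else 0.0
    print(region.label, size, round(compactness, 2))
```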
The automated extraction of the features defined above is based on the a priori knowledge represented in the topographic database ATKIS-DLM200, which offers both a (possibly not up-to-date) geometric and a semantic description of the landuse objects to be extracted from satellite images. In contrast to the commonly used method, where a human operator has to interactively define some representative training areas based on experience and intuition, all DLM objects of the same class within the geocoded satellite image can now be used as training areas without human interaction. A very large sample is therefore available, and a robust estimation of the defined features is performed to exclude disturbances caused by errors in the topographic database or in the image information, e.g. the out-of-date status of some polygons (contour lines), digitizing errors or errors in the geometric correction of the satellite image. For the robust estimation it is assumed that, for each class in the image, more than 50% of the underlying DLM object area belongs to the DLM class, a condition which is fulfilled in most cases.
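One way to realize such a robust estimation, assuming the training pixels of a class have already been collected and more than half of them really belong to that class, is to replace mean and standard deviation by the median and a percentile spread; this is a sketch, not the authors' exact procedure:

```python
import numpy as np

def robust_signature(pixels, low=10, high=90):
    """Median and percentile spread of the training pixels of one class.

    The median tolerates up to 50% contaminated pixels (out-of-date polygons,
    digitizing errors, misregistration), matching the >50% assumption above.
    """
    pixels = np.asarray(pixels, dtype=float)
    return np.median(pixels), np.percentile(pixels, low), np.percentile(pixels, high)

# Hypothetical example: 'forest' training pixels with ~30% disturbed values.
rng = np.random.default_rng(0)
forest = np.concatenate([rng.normal(40, 3, 700),     # true forest reflectance
                         rng.normal(120, 10, 300)])  # disturbed pixels
print(robust_signature(forest))  # the centre stays close to 40
```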
The feature extraction process in this project follows a hierarchical concept. The spectral characteristics of the objects are still among the most important features in satellite image information. To obtain a robust estimation of the spectral signature of each object class, only the representative reflectance values for this class are extracted. For relatively homogeneous objects, like 'water' or 'forest', statistical methods have proven to be sufficient, e.g. histogram analysis (extraction of the standard deviation) or median estimation. Inhomogeneous objects, like 'settlement areas', cannot be treated in this way. Typically, these areas contain a strong mixture of different (sub-)objects (man-made objects, meadows, gardens, trees, water areas etc.) and therefore a wide range of reflectance values. Nevertheless, the accumulation of vegetation-free pixels caused by man-made objects (e.g. buildings and traffic areas) can be seen as representative for 'settlement'. With respect to this knowledge, vegetation-free pixels can be extracted by means of the NDVI (Normalized Difference Vegetation Index):
NDVI = (IR - R) / (IR + R)

IR ... reflectance value in the near infrared
R ... reflectance value in the visible red
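A minimal numpy sketch of this extraction, assuming the near-infrared and red bands are available as aligned arrays; the NDVI threshold of 0.2 is an assumption, not a value taken from the paper:

```python
import numpy as np

def ndvi(ir, r, eps=1e-6):
    """NDVI = (IR - R) / (IR + R), computed per pixel."""
    ir = np.asarray(ir, dtype=float)
    r = np.asarray(r, dtype=float)
    return (ir - r) / (ir + r + eps)

# Hypothetical aligned bands of the geocoded satellite image.
ir_band = np.array([[80.0, 20.0], [75.0, 22.0]])
red_band = np.array([[30.0, 25.0], [28.0, 26.0]])

index = ndvi(ir_band, red_band)

# Vegetation-free pixels (low NDVI) as candidates for 'settlement'.
vegetation_free = index < 0.2
print(index.round(2))
print(vegetation_free)
```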
In Fig. 2, the NDVI for different topographic classes is
shown.
Fig. 2. Normalized Difference Vegetation Index (NDVI).