International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, Valladolid, Spain, 3-4 June, 1999
The result of the whole feature extraction process is a symbolic description of the image content by segments (polygonal contour lines) with a number of attributes (signature, structure, size, shape) and topological information (neighbourhood relations). Starting from the result of the segmentation (see Fig. 9 as an example for the object class 'settlement'), a semantic modelling of the scene content is developed.
[Scatter plot: classwise texture values over grey values (band 4); legend: Water, Forest, Agriculture, Settlement.]
Fig. 10. Modified standard deviation as texture parameter.
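The "modified standard deviation" plotted in Fig. 10 is not defined in this excerpt; as a minimal sketch of such a texture parameter, the following computes a plain per-pixel moving-window standard deviation (function name and window size are illustrative, and the paper's modification is not reproduced here):

```python
import math

def local_std(image, win=3):
    """Per-pixel standard deviation in a win x win window (border pixels
    use only the neighbours that fall inside the image). This is the plain
    moving-window std; the paper's 'modified' variant is unspecified here."""
    h, w = len(image), len(image[0])
    r = win // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            m = sum(vals) / len(vals)
            out[y][x] = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
    return out
```

Homogeneous regions such as water yield values near zero, while textured classes such as settlement produce high values, which is what separates the clusters in Fig. 10.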
[Scatter plot: classwise texture values over grey values (band 1); legend: Settlement, Forest, Agriculture, Water.]
Fig. 11. Texture parameter homogeneity.
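A common definition of a homogeneity texture parameter is the grey-level co-occurrence (GLCM) homogeneity, the sum over grey-level pairs (i, j) of p(i, j) / (1 + |i - j|); whether Fig. 11 uses exactly this definition is an assumption. A minimal sketch for a single pixel offset:

```python
from collections import Counter

def glcm_homogeneity(image, dx=1, dy=0):
    """GLCM homogeneity for one pixel offset (dx, dy):
    sum over (i, j) of p(i, j) / (1 + |i - j|).
    Values near 1 indicate locally uniform grey values; whether this
    exact definition matches the paper's parameter is an assumption."""
    pairs = Counter()
    h, w = len(image), len(image[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                pairs[(image[y][x], image[ny][nx])] += 1
    total = sum(pairs.values())
    return sum(c / total / (1 + abs(i - j)) for (i, j), c in pairs.items())
```

A perfectly uniform patch gives homogeneity 1.0; strong grey-value alternation, as in a textured settlement area, pushes the value down.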
3. SEMANTIC MODELLING OF THE TOPOGRAPHIC
OBJECTS
The information from the segmentation and from the topographic database ATKIS, together with the respective extracted features, is stored in the object-relational database Postgres. This simplifies access to the data, in particular because the chosen database can store and process geometric data. By intersecting the two geometric scene descriptions, from the segmentation and from ATKIS, we obtain a new, unambiguous scene description with disjoint objects, as explained in Section 3.1. The disjoint objects with their processed features are also stored in the relational database. This is our knowledge base for the next step, the classification.
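A minimal stand-in for this storage step can be sketched with Python's built-in sqlite3 module (the paper uses Postgres, and the actual schema is not given in this excerpt, so the table and column names below are invented for illustration):

```python
import sqlite3

# Illustrative schema only: the paper's Postgres schema is not given,
# so table and column names here are invented.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE disjoint_object (
                   id INTEGER PRIMARY KEY,
                   source TEXT,        -- 'both', 'only DLM' or 'only segmentation'
                   object_class TEXT,  -- e.g. 'settlement'
                   area REAL,
                   wkt TEXT            -- polygon geometry as text
               )""")
con.executemany(
    "INSERT INTO disjoint_object (source, object_class, area, wkt)"
    " VALUES (?, ?, ?, ?)",
    [("both", "settlement", 120.5, "POLYGON((0 0,1 0,1 1,0 1,0 0))"),
     ("only DLM", "settlement", 14.2, "POLYGON((1 0,2 0,2 1,1 1,1 0))")])
# Query the unambiguous part of the scene description:
rows = con.execute(
    "SELECT object_class, area FROM disjoint_object"
    " WHERE source = 'both'").fetchall()
```

Storing the geometry alongside the attributes is what lets later steps, such as the intersection and the merging, run as queries against one knowledge base.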
The classification process is performed in a semantic network, which can process general and specific knowledge about the topographic and disjoint objects. This step is explained in Section 3.2. After the classification, disjoint objects with the same semantic meaning and a common border are merged. The result is a complete semantic description of the scene.
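The final merging step, joining classified disjoint objects that share a class and a common border, can be sketched as a union-find over a neighbourhood list (the identifiers, class labels, and input format below are hypothetical):

```python
def merge_objects(classes, adjacency):
    """Group object ids so that neighbours with the same class end up in
    one merged object. `classes` maps id -> class label; `adjacency` is a
    list of (id_a, id_b) common-border pairs. Both inputs are hypothetical
    stand-ins for the database contents."""
    parent = {i: i for i in classes}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for a, b in adjacency:
        if classes[a] == classes[b]:      # merge only same-semantics neighbours
            parent[find(a)] = find(b)

    groups = {}
    for i in classes:
        groups.setdefault(find(i), []).append(i)
    return sorted(sorted(g) for g in groups.values())
```

Because only same-class neighbours are unioned, a forest object between two settlement objects keeps them apart, matching the requirement of both identical semantics and a common border.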
3.1. Building of Disjoint Objects
Because the scene descriptions from the DLM and from the segmentation have ambiguous semantics in certain areas (see Fig. 12), a method is needed to resolve this ambiguity. To obtain an unambiguous scene description, an intersection between corresponding objects is performed.
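For convex shapes, such a pairwise polygon intersection can be sketched with the classic Sutherland-Hodgman clipping algorithm (a simplification: real DLM and segmentation polygons are generally non-convex, so a full polygon-overlay method would be needed in practice):

```python
def clip(subject, clipper):
    """Sutherland-Hodgman: intersect polygon `subject` with a *convex*
    polygon `clipper`. Vertices are (x, y) tuples in counter-clockwise
    order. Degenerate cases (parallel overlapping edges) are not handled."""
    def inside(p, a, b):
        # Left of (or on) the directed clip edge a->b for a CCW clipper.
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0

    def intersection(p, q, a, b):
        # Intersection of the line through p-q with the line through a-b.
        x1, y1, x2, y2 = p[0], p[1], q[0], q[1]
        x3, y3, x4, y4 = a[0], a[1], b[0], b[1]
        d = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
        t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / d
        return (x1 + t*(x2-x1), y1 + t*(y2-y1))

    out = subject
    for a, b in zip(clipper, clipper[1:] + clipper[:1]):
        inp, out = out, []
        for p, q in zip(inp, inp[1:] + inp[:1]):
            if inside(q, a, b):
                if not inside(p, a, b):
                    out.append(intersection(p, q, a, b))
                out.append(q)
            elif inside(p, a, b):
                out.append(intersection(p, q, a, b))
    return out
```

The clipped polygon is the "DLM & segmentation" region of Fig. 13; subtracting it from each input polygon would yield the "only DLM" and "only segmentation" regions.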
Fig. 12. Overlapping DLM and segmentation object.
Fig. 13. Resulting disjoint objects (regions labelled: only segmentation, only DLM, DLM & segmentation).
As an example (see Fig. 12 and Fig. 13), the intersection between the corresponding DLM and segmentation objects is built. The result is a set of disjoint objects with three different semantics: the inner object, where DLM and segmentation have the same semantics, and the outer objects, where either the DLM or the segmentation defines the object as belonging to the object class 'settlement'. This is a simple example, because not only objects of the same object class may be overlaid. Of course, many DLM objects with a class other than that of the intersecting image objects can be found. In addition, because of the classwise segmentation and of digitizing errors, both geometric scene descriptions contain overlapping objects from different classes. Therefore, the intersection process is not only