Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects

interpretation can be increased by using the information from preceding images. Hence, it becomes possible to distinguish, for example, between the construction and the dismantling of buildings or between the regeneration and degeneration of moorland areas (Pakzad, 1999). To realize such a multitemporal analysis, an interpretation system must be able to manage images acquired at different times and to represent and exploit information about possible, or at least probable, temporal changes.
The leading idea of this work is to automate the evaluation of 
aerial images of complex scenes using prior knowledge about the 
object structure, GIS, sensor type, and temporal changes. To ease 
the adaptation of the analysis system to new requirements and the 
extension to future tasks, the knowledge is represented explicitly 
and is separated from system control. Such a knowledge-based approach constitutes the focal point of this work.
In the literature, various approaches to image interpretation and sensor fusion have been presented. Only a few authors try to formalize the representation of the objects and sensors and the control of the information integration. Most interpretation systems, like SPAM (McKeown, 1985) and SIGMA (Matsuyama, 1990), use a hierarchical control and construct the objects
incrementally using multiple levels of detail. The system 
MESSIE (Clement, 1993) models the objects explicitly 
distinguishing four views: geometry, radiometry, spatial context, 
and functionality. It employs frames and production rules. In the 
BPI system (Stilla, 1997), a net of production rules representing a part-of hierarchy describes the structural prior knowledge. A
blackboard realized by an associative memory is used for process 
communication. Another blackboard-based architecture is 
suggested by Mees (1998). He distinguishes between strategy 
knowledge represented by an AND/OR-tree, global knowledge 
described by sensor-independent fuzzy production rules, and 
sensor-dependent local knowledge stored in attributed 
prototypes and image processing operators called local detectors. 
ERNEST (Kummert, 1993) uses semantic nets to exploit the 
object structure for interpretation. The MOSES system extends 
the ERNEST approach to extract man-made objects from aerial 
images (Quint, 1997). The system AIDA presented here (Liedtke, 1997) adopts the idea of formulating prior knowledge about the scene objects with semantic nets. In addition, the control
knowledge is represented explicitly by rules which are selected 
by an inference engine. 
In the following, the system architecture of AIDA is described, and a common concept is presented to distinguish between the semantics of objects and their visual appearance in the different sensors, considering the physical principle of the sensor and the material and surface properties of the objects. The extensions necessary for a multitemporal image analysis are described and illustrated in chapter 5.
2. KNOWLEDGE-BASED INTERPRETATION SYSTEM
For the automatic interpretation of remote sensing images, the 
knowledge-based system AIDA (Liedtke, 1997; Tönjes, 1999)
has been developed. The prior knowledge about the objects to be 
extracted is represented explicitly in a knowledge base. 
Additional domain-specific knowledge, such as GIS data, can be used to support the interpretation process. From the prior knowledge, hypotheses about the appearance of the scene objects are generated, which are then verified in the sensor data. An image
processing module extracts features that meet the constraints 
given by the expectations. It returns the found primitives - like 
line segments - to the interpretation module which assigns a 
semantic meaning to them, e.g. road or river. The system finally 
generates a symbolic description of the observed scene. In the 
following, the knowledge representation and the control scheme of AIDA are described.
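To make the interplay of these components concrete, the following minimal sketch reproduces the hypothesize-and-verify cycle in simplified form; the class and attribute names (Hypothesis, Primitive, min_length) and the numeric values are illustrative assumptions and do not correspond to the actual AIDA interfaces.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    label: str          # semantic meaning to assign, e.g. "road" or "river"
    constraints: dict   # expected appearance derived from the prior knowledge

@dataclass
class Primitive:
    kind: str           # e.g. "line_segment"
    length: float

def extract_primitives(primitive_pool, hypothesis):
    # Stand-in for the image processing module: keep only those primitives
    # that satisfy the constraints given by the expectations.
    min_len = hypothesis.constraints.get("min_length", 0.0)
    return [p for p in primitive_pool if p.length >= min_len]

def interpret(primitive_pool, concepts):
    # Generate hypotheses from the prior knowledge, verify them in the data
    # and assign a semantic meaning to the verified primitives.
    scene_description = []
    for name, expected in concepts.items():
        hypothesis = Hypothesis(label=name, constraints=expected)
        for primitive in extract_primitives(primitive_pool, hypothesis):
            scene_description.append((hypothesis.label, primitive))
    return scene_description   # symbolic description of the observed scene

# Example: two extracted line segments, only the longer one is accepted as a road
segments = [Primitive("line_segment", 80.0), Primitive("line_segment", 12.0)]
print(interpret(segments, {"road": {"min_length": 50.0}}))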
2.1. Knowledge Representation 
The knowledge representation is based on semantic nets. 
Semantic nets are directed acyclic graphs consisting of nodes and the edges between them. The nodes represent the objects
expected in the scene, while the edges or links of the semantic net 
form the relations between these objects. Attributes define the 
properties of nodes and edges. 
The nodes of the semantic net model the objects of the scene and 
their representation in the image. Two classes of nodes are 
distinguished: the concepts are generic models of the objects, and the instances are realizations of their corresponding concepts in the observed scene. Thus, the knowledge base, which is defined prior to the image analysis, is built out of concepts. During interpretation, a symbolic scene description consisting of instances is generated.
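The distinction between concepts and instances can be illustrated by the following sketch; the example concept "road", its attribute range and the measured value are hypothetical and not taken from the AIDA knowledge base.

from dataclasses import dataclass, field

@dataclass
class Concept:
    # Generic model of an expected scene object; part of the knowledge base.
    name: str
    attribute_ranges: dict = field(default_factory=dict)   # expected attribute ranges
    links: list = field(default_factory=list)              # (relation, target concept)

@dataclass
class Instance:
    # Realization of a concept in the observed scene; part of the scene description.
    concept: Concept
    attribute_values: dict = field(default_factory=dict)   # values measured in the data

# Knowledge base defined prior to the image analysis: concepts only
road = Concept("road", attribute_ranges={"width_m": (3.0, 30.0)})

# Symbolic scene description generated during interpretation: instances
detected_road = Instance(road, attribute_values={"width_m": 7.5})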
The object properties are described by attributes attached to the 
nodes. They have a value measured in the data and a range 
describing the expected attribute value. During instantiation, the attribute range of the instance is taken from the corresponding concept and, if possible, is restricted further by information from instantiated parent nodes. For example, an already detected
street segment can constrain the position of the adjacent segment. 
For both the attribute value and the attribute range, a computation method can be defined. A judgement function computes the
compatibility of the measured value with the expected range. 
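A simplified sketch of this attribute mechanism is given below; the interval representation of the range and the binary judgement function are assumptions made for illustration, since the actual computation and judgement functions are not specified here.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Attribute:
    name: str
    expected_range: Tuple[float, float]   # taken from the corresponding concept
    value: Optional[float] = None         # measured in the data

    def restrict_range(self, parent_range: Tuple[float, float]) -> None:
        # Narrow the expected range with information from an already
        # instantiated parent node, e.g. a detected adjacent street segment.
        lo, hi = self.expected_range
        self.expected_range = (max(lo, parent_range[0]), min(hi, parent_range[1]))

    def judge(self) -> float:
        # Crude judgement: 1.0 if the measured value is compatible with the
        # expected range, 0.0 otherwise.
        if self.value is None:
            return 0.0
        lo, hi = self.expected_range
        return 1.0 if lo <= self.value <= hi else 0.0

# Position of the next street segment, constrained by an already detected one
position = Attribute("position_m", expected_range=(0.0, 1000.0))
position.restrict_range((120.0, 180.0))   # constraint from the instantiated parent
position.value = 150.0                    # value measured in the image
print(position.judge())                   # -> 1.0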
The relations between the objects are described by edges or links 
forming the semantic net. The specialization of objects is 
described by the is-a relation introducing the concept of 
inheritance. Along the is-a link, all attributes, edges and functions are inherited by the more specialized node, where they can be overwritten locally. Objects are composed of parts, represented
by the part-of link. Thus, the detection of an object can be 
reduced to the detection of its parts. The transformation of an 
abstract description into its more concrete representation in the 
data is modelled by the concrete-of relation, abbreviated con-of. 
This relation allows the knowledge to be structured in different conceptual layers, for example a scene layer and a sensor layer.
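The three link types can be summarized in a small sketch; the example concepts (motorway, road, road_segment, elongated_bright_region) and the chosen edge directions are illustrative assumptions.

from collections import defaultdict

edges = defaultdict(list)   # source concept -> [(relation, target concept)]

def link(source, relation, target):
    edges[source].append((relation, target))

# is-a: specialization; attributes, edges and functions are inherited by the
# more specialized concept and may be overwritten there.
link("motorway", "is-a", "road")

# part-of: composition (modelled here as an edge from the composite to its
# part); the detection of a road can be reduced to the detection of its segments.
link("road", "part-of", "road_segment")

# con-of: the abstract description in the scene layer is transformed into a
# more concrete representation in the sensor layer.
link("road_segment", "con-of", "elongated_bright_region")

def inherited_edges(concept):
    # Collect the concept's own edges plus those reachable along is-a links.
    result = list(edges[concept])
    for relation, target in edges[concept]:
        if relation == "is-a":
            result.extend(inherited_edges(target))
    return result

print(inherited_edges("motorway"))
# -> [('is-a', 'road'), ('part-of', 'road_segment')]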
	        