
International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, Valladolid, Spain, 3-4 June 1999
KNOWLEDGE BASED INTERPRETATION OF
MULTISENSOR AND MULTITEMPORAL REMOTE SENSING IMAGES
Stefan Growe
Institute of Communication Theory and Signal Processing, University of Hannover
Appelstrasse 9a, D-30167 Hannover, Germany
WWW: http://www.tnt.uni-hannover.de/~growe
E-mail: growe@tnt.uni-hannover.de
KEY WORDS: Knowledge Based Image Interpretation, Semantic Net, Sensor Fusion, Multitemporal Image Analysis.
ABSTRACT
The increasing amount of remotely sensed imagery from multiple platforms requires efficient analysis techniques. The leading idea of the
presented work is to automate the interpretation of multisensor and multitemporal remote sensing images by the use of common prior
knowledge about landscape scenes. The presented system is able to use specific map knowledge of a geoinformation system (GIS),
information about sensor projections and temporal changes of scene objects. The prior knowledge is represented explicitly by a semantic
net. A common concept has been developed to distinguish within the knowledge base between the semantics of objects and their visual
appearance in the different sensors considering the physical principle of the sensor and the material and surface properties of the objects.
This paper presents the basic structure of the system and its use for sensor fusion at different structural and functional levels.
Results are shown for the extraction of roads from multisensor images. The approach for the analysis of multitemporal images is
illustrated with the interpretation of an industrial fairground.
KURZFASSUNG
To cope with the ever-growing amount of remote sensing imagery, efficient analysis techniques are needed to an increasing degree.
The core idea of the present work is to automate the interpretation of multisensor and multitemporal aerial images by using prior
knowledge about the landscape objects. The presented system is able to use specific map knowledge of a geoinformation system,
information about sensor projections, and information about temporal changes of the scene objects for the analysis. The prior
knowledge is stored explicitly in a semantic net. A general concept has been developed to distinguish within the knowledge base
between object semantics and visual appearance in the different sensors, taking into account both the physical principle of the
sensor and the material and surface properties of the objects. This contribution explains the basic structure of the system and its
use for sensor fusion at different structural and functional levels. As an example, results for the extraction of roads from
multisensor images are presented. Furthermore, an approach for the analysis of multitemporal images is introduced and illustrated
with the interpretation of a fairground.
1. INTRODUCTION
The automatic extraction of objects from aerial images for map
updating and environmental monitoring represents a major topic
of remote sensing. However, the results of low-level image
processing algorithms like edge detectors are in general
incomplete, fragmented, and erroneous. To overcome these
problems, a scene interpretation is performed which assigns
object semantics to the features segmented in the remote sensing
image. Prior knowledge about the objects should be used to
constrain the object parameters and to reduce the uncertainty of
the interpretation. To increase or decrease the reliability of
competing interpretations, the structural relationships between
the objects can be exploited.
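
To illustrate how such explicit prior knowledge can constrain an
interpretation, the following minimal sketch models a semantic net
node in Python. All class names, relations, and attribute ranges are
illustrative assumptions for this example, not the actual
implementation of the presented system.

from dataclasses import dataclass, field

@dataclass
class Concept:
    """A node of the semantic net, e.g. 'road' or 'roadside'."""
    name: str
    parts: list["Concept"] = field(default_factory=list)  # part-of relation
    constraints: dict[str, tuple[float, float]] = field(default_factory=dict)

    def is_consistent(self, attributes: dict[str, float]) -> bool:
        """Accept a feature if each measured attribute lies inside its
        allowed range; attributes not measured are not penalized here."""
        return all(lo <= attributes.get(key, lo) <= hi
                   for key, (lo, hi) in self.constraints.items())

# Illustrative prior knowledge: a road consists of two roadsides and
# has a constrained width (all numbers are invented for this example).
roadside = Concept("roadside", constraints={"length_m": (10.0, 1e6)})
road = Concept("road", parts=[roadside, roadside],
               constraints={"width_m": (3.0, 30.0)})

# A line feature delivered by low-level segmentation:
feature = {"width_m": 7.5, "length_m": 120.0}
print(road.is_consistent(feature))    # True -> the hypothesis survives

A hypothesis generated for a segmented feature is kept only if its
measured attributes satisfy the stored ranges; the part-of relation
allows structural evidence, such as two parallel roadsides, to support
the parent hypothesis.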
For remote sensing, different sensors such as optical, thermal,
and radar (SAR) have been developed which collect different
image data of the observed scene. The wish to extract more
information from the data than is possible with a single sensor
alone raises the question of sensor fusion. Several parameters
influence the data fusion: the different platform locations, the
different spectral bands (optical, thermal, or microwave), the
sensing geometry (e.g. perspective projection or SAR geometry),
the spatial resolution, and the season at the time of image
acquisition. State-of-the-art systems must be able to combine
information from different sensors.
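
As a hedged sketch of how the distinction between object semantics
and sensor-specific appearance could be organized, the fragment below
maps one scene object to a different expected appearance per sensor
type; the function name, appearance descriptions, and values are
assumptions made for illustration only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Sensor:
    kind: str            # "optical", "thermal", or "SAR"
    resolution_m: float  # ground resolution

def expected_appearance(obj: str, sensor: Sensor) -> str:
    """Separate the semantics of a scene object from its appearance:
    the same object looks different depending on the sensor principle."""
    table = {
        ("road", "optical"): "elongated region of roughly constant grey value",
        ("road", "thermal"): "stripe contrasting with cooler vegetation",
        ("road", "SAR"):     "dark line (smooth asphalt reflects away from the sensor)",
    }
    return table.get((obj, sensor.kind), "appearance unknown for this pairing")

print(expected_appearance("road", Sensor(kind="SAR", resolution_m=2.0)))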
A partial interpretation already exists for most landscapes: the
map corresponding to the observed scene. Due to the growing
availability of geographic information systems (GIS), the map
data can be accessed directly by computers and is therefore
usable for the automatic interpretation of aerial images.
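
As a sketch of how such map knowledge might serve as a partial
interpretation, the fragment below restricts segmented image features
to a corridor around a road centreline taken from a GIS layer; the
tolerance, coordinates, and helper names are hypothetical.

import math

def dist_point_segment(p, a, b):
    """Euclidean distance from point p to the line segment a-b."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def in_corridor(feature, centreline, half_width):
    """Accept a segmented feature only if all of its points lie within
    half_width of the map centreline: the GIS acts as prior knowledge."""
    return all(
        min(dist_point_segment(q, a, b)
            for a, b in zip(centreline, centreline[1:])) <= half_width
        for q in feature)

centreline = [(0, 0), (100, 10), (200, 15)]   # road axis from the GIS layer
edge_candidate = [(50, 8), (120, 14)]         # line feature from edge detection
print(in_corridor(edge_candidate, centreline, half_width=10))   # True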
Especially for environmental monitoring, it is necessary to
investigate images from different acquisition times to study the
development of the observed area. The quality of a scene