Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects

5.4. Future Work 
To improve the extraction of halls, it is planned to split the segmented regions into compact regions and to approximate them by right-angled polygons fitted to the contours in the image. This will also yield a more accurate detection of the trucks. For the extraction of parking lots, cars, and persons, image processing operators and their interface to the AIDA system have to be implemented. Concerning the semantic net, the knowledge base for the fairground example has to be completed, including the definition of computation and judgement methods. The strategy for the multitemporal image analysis will be tested in detail and improved where necessary.
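To illustrate the idea of a right-angled polygon approximation, the following minimal sketch (not the implementation planned here) snaps the edges of a coarse region outline to the two dominant orthogonal directions; the input polygon, the angle estimation, and the toy building footprint are all assumptions for the example.

```python
"""Minimal sketch: approximate a region outline by a right-angled polygon.
Assumes the contour has already been reduced to a coarse polygon (list of (x, y) vertices)."""
import math

def dominant_angle(poly):
    """Length-weighted mean edge direction, folded to a 90-degree period."""
    sx = sy = 0.0
    for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        theta = math.atan2(dy, dx) % (math.pi / 2)   # fold direction into one quadrant
        sx += length * math.cos(4 * theta)           # circular mean with period 90 deg
        sy += length * math.sin(4 * theta)
    return math.atan2(sy, sx) / 4

def rectilinear_fit(poly):
    """Snap the polygon edges to the two dominant orthogonal directions."""
    phi = dominant_angle(poly)
    cos_p, sin_p = math.cos(-phi), math.sin(-phi)
    # rotate the polygon into the dominant frame
    rot = [(x * cos_p - y * sin_p, x * sin_p + y * cos_p) for x, y in poly]
    # classify every edge as horizontal or vertical and merge consecutive equal edges
    lines = []                                       # entries: ('H', y) or ('V', x)
    for (x0, y0), (x1, y1) in zip(rot, rot[1:] + rot[:1]):
        kind = 'H' if abs(x1 - x0) >= abs(y1 - y0) else 'V'
        value = (y0 + y1) / 2 if kind == 'H' else (x0 + x1) / 2
        if lines and lines[-1][0] == kind:
            lines[-1] = (kind, (lines[-1][1] + value) / 2)
        else:
            lines.append((kind, value))
    if len(lines) > 1 and lines[0][0] == lines[-1][0]:
        lines[0] = (lines[0][0], (lines[0][1] + lines.pop()[1]) / 2)
    # corners are the intersections of consecutive H and V lines, rotated back
    corners = []
    for (k0, v0), (k1, v1) in zip(lines, lines[1:] + lines[:1]):
        x, y = (v1, v0) if k0 == 'H' else (v0, v1)
        corners.append((x * math.cos(phi) - y * math.sin(phi),
                        x * math.sin(phi) + y * math.cos(phi)))
    return corners

# toy example: a slightly noisy L-shaped hall footprint (hypothetical values)
hall = [(0, 0), (10.2, 0.1), (10, 6), (5.1, 6.2), (5, 10), (-0.2, 9.9)]
print(rectilinear_fit(hall))
```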
Additionally, a concept will be developed to allow the monitoring of land use changes and the detection of new constructions, again using temporal relations to model possible state transitions. This will be tested for the interpretation and monitoring of moorland areas near Hannover.
Currently, the uncertainty and vagueness of the data are handled within the semantic net by a possibilistic judgement approach. It is planned to develop a second judgement calculus based on a probabilistic belief network (Bayesian net), which exploits the nodes and edges of the semantic net. The two judgement approaches will then be compared.
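The contrast between the two calculi can be sketched for a single composite node. The following example is an assumption for illustration only, not the paper's judgement calculus: the possibility of a composite object is bounded by its least possible part (min-combination), while a naive Bayesian update combines per-part likelihoods with a prior; all part names, degrees, and likelihood values are hypothetical.

```python
"""Minimal sketch: possibilistic versus Bayesian judgement of a composite node."""

def possibilistic_judgement(part_possibilities):
    """Conjunctive combination: the composite is no more possible than its least possible part."""
    return min(part_possibilities)

def bayesian_judgement(prior, likelihoods):
    """Naive-Bayes style update, assuming the parts are conditionally independent.
    Each entry is (P(evidence | object), P(evidence | not object))."""
    p_obj, p_not = prior, 1.0 - prior
    for p_e_given_obj, p_e_given_not in likelihoods:
        p_obj *= p_e_given_obj
        p_not *= p_e_given_not
    return p_obj / (p_obj + p_not)

# toy judgement of a "hall" hypothesis from three parts (hypothetical values)
parts = {"bright_roof_region": 0.9, "right_angled_outline": 0.7, "adjacent_shadow": 0.8}
print("possibility:", possibilistic_judgement(parts.values()))       # -> 0.7

prior = 0.3                                           # assumed prior for "hall" at this location
likelihoods = [(0.9, 0.3), (0.7, 0.2), (0.8, 0.4)]    # assumed per-part likelihoods
print("posterior:", round(bayesian_judgement(prior, likelihoods), 3))  # -> 0.9
```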
To obtain more accurate segmentation results, a self-adaptive image processing module based on agents is currently being developed. This system will iteratively select, configure, and adapt an appropriate image processing operator according to a task description derived from the expectations and constraints of the semantic net. Finally, the segmentation results that best match the given task description will be returned to the semantic net.
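The adaptation loop can be illustrated with a deliberately simple sketch; the threshold operator, the area-based match score, and the task description used below are assumptions standing in for the actual agent module.

```python
"""Minimal sketch: an agent configures a segmentation operator and keeps the
result that best matches a task description derived from the semantic net."""
import numpy as np

def threshold_operator(image, threshold):
    """Toy segmentation operator: binary mask of pixels brighter than the threshold."""
    return image > threshold

def match_score(mask, task):
    """Score how well the result matches the expectation (1.0 = exact area match)."""
    area = int(mask.sum())
    expected = task["expected_area"]
    return 1.0 - abs(area - expected) / max(area, expected, 1)

def adaptive_segmentation(image, task, thresholds=range(40, 220, 10)):
    """Agent loop: configure the operator, evaluate, and keep the best-scoring result."""
    best_mask, best_score, best_t = None, -1.0, None
    for t in thresholds:
        mask = threshold_operator(image, t)
        score = match_score(mask, task)
        if score > best_score:
            best_mask, best_score, best_t = mask, score, t
    return best_mask, best_score, best_t

# toy scene: a dark background with one bright 20 x 30 pixel "hall roof"
image = np.full((100, 100), 60, dtype=np.uint8)
image[10:30, 20:50] = 180
task = {"expected_area": 600}          # expectation propagated from the semantic net (assumed)
mask, score, t = adaptive_segmentation(image, task)
print(f"chosen threshold {t}, matched area {int(mask.sum())}, score {score:.2f}")
```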
6. CONCLUSIONS 
A knowledge based scene interpretation system called AIDA was presented. It uses semantic nets, rules, and computational methods to represent the knowledge needed for the interpretation of remote sensing images. Controlled by an adaptable interpretation strategy, the knowledge base is exploited to derive a symbolic description of the observed scene in the form of an instantiated semantic net. If available, the information of a GIS database is used as a partial interpretation, increasing the reliability of the generated hypotheses. The system is employed for the automatic recognition of complex structures from multisensor images.
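The role of the GIS as a partial interpretation can be pictured with a minimal data-structure sketch; the concept names, confidence values, and part-of relations below are hypothetical and do not reproduce the AIDA knowledge base.

```python
"""Minimal sketch: GIS entries seed the interpretation as verified partial instances."""
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    parts: list = field(default_factory=list)        # part-of edges to other concepts

@dataclass
class Instance:
    concept: str
    state: str = "hypothesis"                        # "hypothesis" or "verified"
    confidence: float = 0.5

# tiny knowledge base (assumed): a fairground is composed of halls and parking lots
fairground = Concept("fairground", parts=[Concept("hall"), Concept("parking_lot")])

def seed_from_gis(gis_objects):
    """GIS entries become already verified partial instances with higher confidence."""
    return [Instance(concept=o["class"], state="verified", confidence=0.9)
            for o in gis_objects]

def hypothesize_parts(concept, partial):
    """Generate hypotheses only for parts not already covered by the GIS seed."""
    covered = {i.concept for i in partial}
    return [Instance(concept=p.name) for p in concept.parts if p.name not in covered]

partial = seed_from_gis([{"class": "hall"}])
print(partial + hypothesize_parts(fairground, partial))
```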
Currently, extensions are being made to provide a multitemporal analysis. The use of knowledge about temporal changes improves the generation of hypotheses for succeeding time instances and allows, for example, the extraction of complex structures such as an industrial fairground. The temporal knowledge is represented in a state transition graph and integrated into the semantic net. A new interpretation strategy generates hypotheses for the successor state of an object in the next image, which are verified in the sensor data. The first results show that knowledge based scene interpretation is a promising approach for the analysis of multisensor and multitemporal images.
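The multitemporal strategy described above can be summarized in a short sketch: a state transition graph lists plausible successor states, hypotheses for the next image are generated from the currently verified state, and each hypothesis is checked against evidence from the new sensor data. The state names, cue sets, and verification rule below are hypothetical placeholders, not the fairground model itself.

```python
"""Minimal sketch: successor-state hypotheses from a state transition graph."""

# assumed state transition graph: state -> possible successor states
TRANSITIONS = {
    "empty_site":               ["halls_under_construction", "empty_site"],
    "halls_under_construction": ["fairground_in_use", "halls_under_construction"],
    "fairground_in_use":        ["dismantling", "fairground_in_use"],
    "dismantling":              ["empty_site"],
}

def successor_hypotheses(current_state):
    """Temporal knowledge: which states may follow the currently verified one."""
    return TRANSITIONS.get(current_state, [])

def verify(hypothesis, evidence):
    """Mock verification: accept a hypothesis if its expected cues are all observed."""
    expected_cues = {
        "fairground_in_use":        {"many_cars", "trucks_at_halls"},
        "halls_under_construction": {"bare_soil", "partial_roofs"},
        "dismantling":              {"trucks_at_halls", "few_cars"},
        "empty_site":               {"bare_soil"},
    }
    return expected_cues[hypothesis] <= evidence

# state verified in the previous image and cues segmented from the new image (assumed)
previous_state = "halls_under_construction"
evidence = {"many_cars", "trucks_at_halls", "closed_roofs"}

accepted = [h for h in successor_hypotheses(previous_state) if verify(h, evidence)]
print(accepted)    # -> ['fairground_in_use']
```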
Thank you.