Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects

International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, Valladolid, Spain, 3-4 June, 1999 
OPERATIONAL APPLICATIONS OF MULTI-SENSOR IMAGE FUSION 
C. Pohl 1, H. Touron 2
1 International Institute for Aerospace Survey and Earth Sciences (ITC), P.O. Box 6, 7500 AA Enschede, The Netherlands, 
pohl@itc.nl 
2 Western European Union Satellite Centre (WEUSC), P.O. Box 511, 28850 Torrejon de Ardoz (Madrid), Spain 
KEYWORDS: WEU, Visual Interpretation, Data and Image Fusion, Operationalization, Complementarity 
ABSTRACT 
Space-based observation provides repeated, unrestricted access to every corner of the globe, in full compliance with international
law, and this capability may provide early warning of crises before action has to be taken to deal with them. Risks can be assessed 
before they turn into threats. The Western European Union Satellite Centre (WEUSC) operationally exploits imagery derived from 
different Earth observation satellites for security and defence purposes. At the WEUSC, the remote sensing data is digitally processed 
to enhance visual image interpretation capabilities. One of the processes applied is image fusion. This paper reports on the 
experiences gained using image fusion as a tool to integrate multi-sensor images in order to benefit from increased spatial, spectral 
and temporal resolution, in addition to increased reliability and reduced ambiguity. After a short introduction, the concept of image 
fusion as it is implemented at the WEUSC is described, followed by an explanation of the processing involved. An overview of 
operational applications using image fusion as a major step prior to visual image interpretation allows the compilation of a list of 
operational statements, vital in the implementation of image fusion. A very important factor in applying image fusion is the 
integration of complementary data. The complementarity of visible and infrared (VIR) with synthetic aperture radar (SAR) images is 
a well-known example, where the objects contained in the images are 'seen' from very different perspectives (wavelength and viewing 
geometry). The integration of high resolution and multispectral information forms another type of complementarity. This paper 
provides an overview of issues in operationally-used image fusion relating to the processing involved and discusses benefits and 
limitations of approaches, illustrated by examples. All results have to be viewed in the context of visual image exploitation. 
1. INTRODUCTION 
Multi-sensor data fusion is now widely recognized in the 
remote sensing user community. This is evident from the 
number of conferences and workshops focusing on data fusion, 
as well as the special issues of scientific journals dedicated to 
the topic. Previously, data fusion, and in particular image 
fusion, belonged to the world of research and development. In 
the meantime it has become a valuable technique for data 
enhancement in many applications. More and more data 
providers envisage the marketing of fused products, and software 
vendors have started to offer pre-defined fusion methods within their 
generic image processing packages. 
The WEUSC operationally exploits images from many different 
sensors available on the market in order to respond to requests 
from the WEU council or WEU Member States in the fields of 
general security surveillance, support for Petersberg missions 1 
and surveillance in more specific spheres. In order to perform 
multi-sensor image interpretation as required operationally from 
the Satellite Centre, the image analysts process the imagery to 
obtain enhanced and suitable image products. One of the 
approaches utilized is image fusion. Tools for and training 
material on image fusion have been developed and implemented 
in order to support the daily work of the image analysts. The 
data fusion system (DFS) is used to benefit from improved 
spectral, spatial and temporal resolution. Furthermore, the 
use of multi-sensor data ensures the availability of 
data when it is needed and compensates for deficiencies 
in individual satellite images (e.g. cloud cover). In addition, 
image fusion can contribute to more reliable and less 
ambiguous interpretation results. 

1 Petersberg missions: WEU term to describe missions ranging 
from humanitarian and rescue tasks to tasks involving combat 
forces in crisis management, including peacemaking. 
The following sections provide a short description of the fusion 
approach established and experiences obtained in an 
operational environment, illustrated by real-world examples. 
2. IMAGE FUSION 
Image fusion aims at the integration of complementary data to 
enhance the information content of the imagery, i.e. make the 
imagery more useful to a particular application. From the 
experiences gained in the past, it is clear that the selection of an 
image fusion approach depends on the desired application. The 
definition of image combinations and techniques depends on 
the characteristics a dataset should have in order to serve the 
user (Pohl and van Genderen, 1998). However, it is possible to 
summarize a general approach, which describes the overall 
processing chain needed in order to achieve image fusion (see 
Fig. 1.). Basically, the dataset to be fused has to be pre-processed 
in order to achieve conformity or data alignment as 
defined by Wald (1998b). In the case of multi-sensor image
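The processing chain described above — alignment first, then combination of the aligned bands — can be illustrated with a minimal sketch of one widely used pixel-level fusion technique, the Brovey transform, which injects the spatial detail of a panchromatic band into co-registered multispectral bands. The function name, array shapes and synthetic data below are illustrative assumptions for this sketch, not the WEUSC's DFS implementation.

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    """Brovey-transform pixel-level fusion.

    ms  : (H, W, 3) co-registered multispectral bands (float)
    pan : (H, W)    panchromatic band resampled to the same grid

    Each multispectral band is scaled by the ratio of the pan
    intensity to the sum of the bands, so the pan image supplies
    spatial detail while the relative spectral composition of
    each pixel is preserved.
    """
    total = ms.sum(axis=-1, keepdims=True) + eps  # eps avoids division by zero
    return ms * (pan[..., None] / total)

# Synthetic example: a 4x4 scene with uniform multispectral bands
# and a panchromatic band carrying all the spatial variation.
ms = np.full((4, 4, 3), 50.0)
pan = np.linspace(100.0, 160.0, 16).reshape(4, 4)
fused = brovey_fusion(ms, pan)

# The fused intensities follow the pan spatial pattern:
assert np.allclose(fused.sum(axis=-1), pan, atol=1e-3)
```

Note that the sketch presupposes the pre-processing step the text describes: both inputs must already be geometrically aligned and resampled to a common grid before the per-pixel arithmetic is meaningful.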