The laser distance measurement enables the distance between the reflecting surface and the aircraft to be derived. The elevation of the reflecting locations can be calculated very precisely in the geodetic framework, because the exact position and flight direction of the aircraft can be determined by a Global Positioning System (GPS) and an Inertial Navigation System (INS) (Wehr and Lohr, 1999). Additional alignment parameters (boresight angles and lever arm) are needed to calculate the coordinates in a coordinate system relevant to the laser scanner.
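To make this georeferencing chain concrete, the following minimal sketch combines the GPS position, the INS attitude, the alignment parameters and the measured range into a ground coordinate. All function names, the simple Z-Y-X rotation model and the example values are illustrative assumptions, not the calibrated procedure of an operational system.

```python
# Hypothetical sketch of direct georeferencing for a single laser return.
import numpy as np

def rotation_matrix(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Rotation from sensor/body frame to the mapping frame (Z-Y-X order, radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def laser_point(gps_position, ins_angles, boresight_angles, lever_arm,
                scan_angle, slant_range):
    """Ground coordinates of one return: GPS position plus the rotated sensor vector."""
    # Beam direction in the scanner frame, deflected across track by scan_angle.
    beam = np.array([0.0, np.sin(scan_angle), -np.cos(scan_angle)])
    R_body = rotation_matrix(*ins_angles)        # body -> mapping frame (from INS)
    R_bore = rotation_matrix(*boresight_angles)  # scanner -> body (alignment parameters)
    return np.asarray(gps_position) + R_body @ (R_bore @ (slant_range * beam) + lever_arm)

# Example: aircraft at 1000 m, nadir-pointing beam, level flight.
p = laser_point(gps_position=[0.0, 0.0, 1000.0],
                ins_angles=(0.0, 0.0, 0.0),
                boresight_angles=(0.0, 0.0, 0.0),
                lever_arm=np.array([0.0, 0.0, -0.2]),
                scan_angle=0.0, slant_range=999.8)
print(p)  # -> [0, 0, 0]: the ground point at the origin
```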
2.3 Multi- and Hyperspectral Scanners
Measurements depend on illumination conditions, topography
and angle-dependent surface reflection features.
The aim of using these sensors is to identify specific objects (plant cover, crops) and object features in the surveyed areas by analysing the spectral surface reflection. For this purpose, common remote sensing and image processing techniques (e.g. segmentation and classification) are applied.
Multispectral scanners possess 10 to 12 bands (e.g. Landsat TM); hyperspectral scanners possess more than 100 bands (e.g. HyMap). Hyperspectral scanners (imaging spectrometers) measure object-specific signatures with high spectral resolution, which permits the recording of an almost continuous spectrum for every image element. Objects detected on the earth's surface thus become separable and classifiable, since they exhibit characteristic absorption and reflection features in very narrow spectral bands that cannot be resolved by conventional sensors. However, the spatial resolution is restricted for energetic reasons.
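As an illustration of how such narrow-band signatures can be exploited, the sketch below classifies the pixels of a hyperspectral cube with the spectral angle mapper (SAM), a common signature-matching technique; the reference spectra, names and cube dimensions are invented for demonstration and are not taken from the paper.

```python
# Illustrative sketch: classify hyperspectral pixels by comparing their
# spectra against reference signatures with the spectral angle mapper.
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Angle (radians) between a pixel spectrum and a reference signature."""
    cos_theta = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def classify(cube: np.ndarray, library: dict[str, np.ndarray]) -> np.ndarray:
    """Assign each pixel of an (rows, cols, bands) cube to the nearest signature."""
    rows, cols, _ = cube.shape
    labels = np.empty((rows, cols), dtype=object)
    for r in range(rows):
        for c in range(cols):
            angles = {name: spectral_angle(cube[r, c], ref) for name, ref in library.items()}
            labels[r, c] = min(angles, key=angles.get)  # smallest angle wins
    return labels

# Toy example: 2x2 cube with 120 bands and two synthetic reference spectra.
bands = 120
library = {"vegetation": np.linspace(0.1, 0.6, bands),
           "bare_soil": np.linspace(0.4, 0.3, bands)}
cube = np.stack([[library["vegetation"] * 1.1, library["bare_soil"] * 0.9],
                 [library["bare_soil"], library["vegetation"]]])
print(classify(cube, library))
```

Because SAM compares only the direction of the spectral vector, it is insensitive to overall brightness differences, which is why the scaled spectra in the toy example are still classified correctly.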
2.4 Radar sensors
Owing to the spectral range used, these sensors also work under cloudy conditions. With the aid of radar sensors, useful information can also be obtained from territories with extremely low contrast, such as the ice areas of the Arctic.
Interactions between radar signals and the objects under investigation (reflection features and penetration depth) are determined by the frequency and polarisation of the radar signal. Interferometric Synthetic Aperture Radar (InSAR) systems are based on the analysis of phase differences between two SAR datasets taken from different positions. Because the phase difference relates to the terrain height, high-resolution digital elevation models (DEMs) can be generated (TSGC, 2004).
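The relation between phase difference and terrain height can be sketched with the textbook first-order approximation for repeat-pass InSAR; the symbols (wavelength, slant range, look angle, perpendicular baseline) and the C-band example values below are illustrative assumptions, not parameters of any specific system cited in this paper.

```python
# Sketch of the first-order repeat-pass InSAR height relation:
# terrain height from unwrapped phase, and the height of ambiguity
# (the height corresponding to one full 2*pi interferometric fringe).
import math

def height_from_phase(phi: float, wavelength: float, slant_range: float,
                      look_angle: float, baseline_perp: float) -> float:
    """h ~ (lambda * R * sin(theta) / (4 * pi * B_perp)) * phi  (repeat-pass)."""
    return wavelength * slant_range * math.sin(look_angle) * phi / (4 * math.pi * baseline_perp)

def height_of_ambiguity(wavelength: float, slant_range: float,
                        look_angle: float, baseline_perp: float) -> float:
    """Terrain height per 2*pi phase cycle: lambda * R * sin(theta) / (2 * B_perp)."""
    return wavelength * slant_range * math.sin(look_angle) / (2 * baseline_perp)

# Illustrative C-band scenario: lambda = 5.6 cm, R = 850 km,
# theta = 23 degrees, B_perp = 150 m.
print(height_of_ambiguity(0.056, 850e3, math.radians(23.0), 150.0))  # ~62 m per fringe
```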
3. FUSION METHODS AND TECHNIQUES
In the following, some fundamentals of fusion in relation to photogrammetry and remote sensing, as well as relevant methods and applications, are described.
3.1 Fusion
In data processing, various algorithms or specialised software are applied to obtain derived information from raw sensor data. Objects and their features can thus be derived from image data by segmentation algorithms, and the behaviour of these objects in the surveyed area can be described. Based on this information, decisions can be made. Each processing step corresponds to an increasing level of information extraction, and fusion with other sensors is possible at each level.
In this paper, fusion techniques are subdivided into pixel-, feature- and decision-level fusion (Klein, 2004) (see Figure 1).
• Pixel-level fusion: combination of the raw data of different sensors, or of sensor channels within a common sensor, into a single image (e.g. pan-sharpening of Landsat imagery).
• Feature-level fusion: requires the extraction of individual features from each sensor or sensor channel before merging them into a composite feature representative of the object in the common field of view of the sensors (e.g. tracking).
• Decision-level/information-level fusion: combination of the initial object detection and classification results of the individual sensors to obtain a merged product, or even a decision, by a fusion algorithm (a minimal voting example is sketched after Figure 1).
Figure 1. Pixel-level, feature-level and decision-level fusion
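A minimal sketch of the decision-level case, assuming co-registered per-sensor classification results that are merged by majority voting; the sensor names and class labels are invented for illustration.

```python
# Hypothetical decision-level fusion: per-sensor classification results
# for the same scene are merged pixel by pixel with a majority vote.
from collections import Counter

def fuse_decisions(per_sensor_labels: list[list[str]]) -> list[str]:
    """Merge aligned per-pixel class decisions from several sensors."""
    fused = []
    for votes in zip(*per_sensor_labels):  # one tuple of votes per pixel
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Three sensors, four co-registered pixels.
optical = ["water", "forest", "urban", "forest"]
radar   = ["water", "forest", "water", "urban"]
thermal = ["water", "urban",  "urban", "forest"]
print(fuse_decisions([optical, radar, thermal]))
# -> ['water', 'forest', 'urban', 'forest']
```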
The merged dataset has a higher information content than each individual source image of the considered scene; owing to the competitive or complementary information, the result possesses a greater richness of detail. The images must be co-registered prior to fusion, in both the spatial and the temporal sense. Image fusion of multi-sensor images is of great importance for earth and space observation, especially for mapping in the fields of environment, agriculture and oceanography.
3.2 Applications and methods
Applications for sensor and data fusion include environmental monitoring, object recognition and detection, as well as change detection (e.g. Sault et al., 2005; Hill et al., 1999; Duong, 2002; Schatten et al., 2006; Bujor et al., 2001; Madhavan et al., 2006). Most of the applications and methods in photogrammetry and remote sensing are based on pixel-level fusion; only tracking can also be carried out at the feature level.
Typical methods and techniques concern the improvement of
data (resolution enhancement, bridging data gaps of other
sensors), the combination of image data and elevation or
distance data (orthophoto generation), the combination of high-
resolution panchromatic and lower-resolution multispectral data
(pan-sharpening) as well as detection and tracking within
observed areas.
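As an illustration of the pixel-level pan-sharpening named above, the following sketch applies the classical Brovey transform, one of several common pan-sharpening algorithms; the array shapes, the simple mean-intensity estimate and all values are assumptions for demonstration, not the method of any work cited here.

```python
# Sketch of pixel-level fusion via the classical Brovey transform:
# each multispectral band is rescaled by the ratio of the high-resolution
# panchromatic band to the multispectral intensity. Assumes the multispectral
# bands were already resampled and co-registered to the panchromatic grid,
# as required prior to any fusion.
import numpy as np

def brovey_pansharpen(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """ms: (bands, rows, cols) multispectral cube; pan: (rows, cols) panchromatic band."""
    intensity = ms.mean(axis=0)            # simple per-pixel intensity estimate
    ratio = pan / (intensity + eps)        # per-pixel detail-injection gain
    return ms * ratio[np.newaxis, :, :]    # sharpened cube, same shape as ms

# Toy example: 3 bands of 4x4 pixels with a synthetic panchromatic band.
rng = np.random.default_rng(0)
ms = rng.uniform(0.2, 0.8, size=(3, 4, 4))
pan = ms.mean(axis=0) + rng.normal(0.0, 0.05, size=(4, 4))  # pan carries the detail
sharpened = brovey_pansharpen(ms, pan)
print(sharpened.shape)  # (3, 4, 4)
```

The ratio formulation preserves the relative band proportions (and hence the colour impression) while injecting the spatial detail of the panchromatic band.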
Initially, pan-sharpening (e.g. the integration of high-resolution
SPOT data and multispectral Landsat data) took centre stage.
Vijayaraj performed a quantitative analysis of pan-sharpened images and presented a review of common pan-sharpening algorithms (Vijayaraj et al., 2006). High-resolution