Pyramid-based fusion is based on the construction of a pyramid for each source image, the selection of coefficients according to a salience metric, and the construction of the fused image from the fused pyramid coefficients. The salience metric is e.g. the energy (or squared sum) of the pyramid coefficients in an area of e.g. 5x5 hyper-pixels. Minimum and maximum criteria are used for the selection. The last step is to apply the inverse transform to obtain the fused image (Sharma, 1999).
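As an illustration only, the following Python/NumPy sketch shows the maximum/minimum-salience selection rule described above for two coefficient planes of the same pyramid level. It assumes the pyramid decomposition and the inverse transform are provided elsewhere; the function name and parameters are hypothetical and this is not the toolbox implementation described in Section 3.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def select_coefficients(coeff_a, coeff_b, window=5, criterion="max"):
    """Fuse two equally sized pyramid coefficient planes (illustrative sketch).

    Salience is taken as the local mean of squared coefficients in a
    window x window neighbourhood (proportional to the local energy);
    the coefficient of the source with the larger (criterion="max") or
    smaller (criterion="min") salience is kept.
    """
    energy_a = uniform_filter(coeff_a.astype(float) ** 2, size=window)
    energy_b = uniform_filter(coeff_b.astype(float) ** 2, size=window)
    if criterion == "max":
        mask = energy_a >= energy_b
    else:
        mask = energy_a <= energy_b
    return np.where(mask, coeff_a, coeff_b)
```

The rule would be applied level by level to the detail coefficients of both pyramids; the fused image is then obtained by applying the inverse transform to the fused pyramid.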
3. MATLAB IMPLEMENTATION
All described algorithms have been implemented in MATLAB. The idea behind this MATLAB development is to design and implement a toolbox for image fusion. This toolbox can easily be extended by other fusion approaches and, probably more importantly, it can be combined with a variety of other MATLAB functions such as image registration and sensor orientation. At the time of writing this paper, quality measures and metrics to assess and compare the quality of image fusion products (e.g. the Image Noise Index method, Leung et al. 2001) could not be implemented to a sufficient extent. Ideas on this issue are published and intensively discussed, e.g. by the EARSeL Special Interest Group (Wald, 2004).
Figure 1. MATLAB user interface for image rectification and fusion
Figure 1 gives an impression of one of the graphical user interfaces (GUIs) of the toolbox. Of course, all developed MATLAB functions can also be applied without the GUI.
4. EXPERIMENTS AND RESULTS
The algorithms described above are applied to fuse IRS-1C and ASTER images. The panchromatic IRS-1C image has 5 m pixel size; the multispectral ASTER images have 15 m pixels. The ASTER bands B1, B2 and B4 are used in the experiments. Both images are shown in Figure 2. Due to the scaling of the pictures, the difference in resolution between the two images is not visible in Figure 2.
The concept for evaluating the fusion methods is based on the idea of using the IRS-1C image data at a reduced resolution of 15 m and the ASTER images at a reduced resolution of 45 m. This maintains the resolution ratio between IRS and ASTER and allows the image fusion result at the 15 m resolution level to be compared with the original ASTER images as well as with the IRS image at 5 m resolution. Correlation is used for the statistical comparison of the fusion result with the original images.
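The following Python/NumPy sketch outlines this evaluation idea only as an illustration. The block-averaging degradation as well as the names degrade, fuse, irs_pan_5m and aster_15m are assumptions introduced here, not the procedure actually used in the paper.

```python
import numpy as np

def degrade(band, factor=3):
    """Reduce the resolution of a band by block averaging
    (the exact degradation method is an assumption here)."""
    h = (band.shape[0] // factor) * factor
    w = (band.shape[1] // factor) * factor
    blocks = band[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# Evaluation idea: degrade IRS-1C from 5 m to 15 m and ASTER from 15 m to
# 45 m, fuse at the reduced level, and compare the 15 m fusion result with
# the original 15 m ASTER bands (and the 5 m IRS-1C image).
# pan_15m   = degrade(irs_pan_5m)                 # hypothetical input arrays
# ms_45m    = [degrade(b) for b in aster_15m]     # hypothetical input arrays
# fused_15m = fuse(pan_15m, ms_45m)               # any of the ten fusion methods
```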
Figure 2. Panchromatic IRS-1C image (left) and original ASTER bands B1, B2 and B4 as RGB composite (right)
The application of the data fusion algorithms leads to 10 different fused images. The result of the wavelet fusion method is shown as one example in Figure 3. All other fused images are not plotted due to the lack of space.
Figure 3. Fusion result for the ASTER bands
The differences can already be noticed visually. To obtain a first statistical quantification, the normalized cross-correlation is computed between the different fusion results and the original intensity channel (table and graph in Figure 4) and the original spectral image channels (table and graph in Figure 5), which serve as a kind of ground truth.
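One plausible form of this measure is sketched below in Python/NumPy: the zero-mean normalized cross-correlation between a fused band and the corresponding original band. The exact definition used in the paper is not given, so this formulation is an assumption for illustration.

```python
import numpy as np

def normalized_cross_correlation(fused_band, reference_band):
    """Zero-mean normalized cross-correlation between a fused band and
    the corresponding original band used as ground truth (sketch)."""
    f = fused_band.astype(float) - fused_band.mean()
    r = reference_band.astype(float) - reference_band.mean()
    return float((f * r).sum() / np.sqrt((f ** 2).sum() * (r ** 2).sum()))

# e.g. normalized_cross_correlation(fused_b2, aster_b2_original)  # hypothetical arrays
```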
Correlation values of around 60% in the worst case and around 90% in the best case can be observed in these figures. Altogether, a quite homogeneous behaviour over all 12 results can be noticed. (Note: the DWT and the shift-invariant DWT with minimum and maximum selection criteria are listed separately in the table.) Differences of plus or minus 5% in the correlation values are already visually noticeable. Nevertheless, the limited significance of the correlation value for characterizing the fusion result calls for more sophisticated quality measures.
5. CONCLUSION
The goal of this paper, to study fusion techniques, has been approached by formulating a great variety of different fusion procedures. The mathematical formulation of ten data fusion techniques has been worked out, including colour transformations, wavelet techniques, gradient- and Laplacian-based techniques, contrast and morphological techniques, feature selection, and simple averaging procedures.
Quality-related investigations based on correlation showed a fairly homogeneous behaviour of the correlation values over all fusion results. A detailed look at the fusion results reveals differences between the procedures, which have to be investigated further with more sophisticated quality measures. Regarding the quality issues, the paper delivers an intermediate report on ongoing research.