$$
T_4 =
\begin{bmatrix}
\frac{1}{\sqrt{4}} & \frac{1}{\sqrt{4}} & \frac{1}{\sqrt{4}} & \frac{1}{\sqrt{4}} \\
\frac{1}{\sqrt{12}} & \frac{1}{\sqrt{12}} & \frac{1}{\sqrt{12}} & \frac{-3}{\sqrt{12}} \\
\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{-2}{\sqrt{6}} & 0 \\
\frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} & 0 & 0
\end{bmatrix}
\qquad (4)
$$

In general, the n-D transform matrix can be written as

$$
T_n =
\begin{bmatrix}
\frac{1}{\sqrt{n}} & \frac{1}{\sqrt{n}} & \cdots & \frac{1}{\sqrt{n}} & \frac{1}{\sqrt{n}} \\
\frac{1}{\sqrt{(n-1)n}} & \frac{1}{\sqrt{(n-1)n}} & \cdots & \frac{1}{\sqrt{(n-1)n}} & \frac{-(n-1)}{\sqrt{(n-1)n}} \\
\vdots & & & & \vdots \\
\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{-2}{\sqrt{6}} & \cdots & 0 \\
\frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} & 0 & \cdots & 0
\end{bmatrix}
\qquad (5)
$$

where the normalization factor in row i (i = 2, ..., n) is $1/\sqrt{(n-i+1)(n-i+2)}$.
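The minimal sketch below (a Python/NumPy illustration of ours, not part of the original method description) constructs T_n of Eq. 5 for any n; the helper name gihs_matrix is assumed, and the unit-norm intensity row follows the normalized-vector formulation used here, so switching to the plain band average would only change a constant.

```python
import numpy as np

def gihs_matrix(n):
    """Build the n-D generalized IHS transform matrix of Eq. 5.

    Row 1 is the (unit-norm) intensity, i.e. the band average up to a
    constant; row i (i >= 2) differences the average of the first
    n-i+1 bands against band n-i+2, up to the normalization factor
    1/sqrt((n-i+1)(n-i+2)).
    """
    T = np.zeros((n, n))
    T[0, :] = 1.0 / np.sqrt(n)                     # intensity row
    for i in range(2, n + 1):
        m = n - i + 1                              # number of leading ones
        norm = np.sqrt((n - i + 1) * (n - i + 2))
        T[i - 1, :m] = 1.0 / norm
        T[i - 1, m] = -m / norm
    return T

T4 = gihs_matrix(4)                                # reproduces Eq. 4
print(np.round(T4, 4))
print(np.allclose(T4 @ T4.T, np.eye(4)))           # rows are orthonormal, so T^-1 = T^T
```

Because the rows are orthonormal, the inverse transform needed later for fusion is simply the transpose of T_n.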
A number of observations can be made on the above generalized
IHS (GIHS) transform. First, a variation of the transform
may be derived from a different calculation of the
intensity. As shown in the equations, we have used the
normalized vector for the intensity calculation. In fact, the
average of the involved bands may also be used as the intensity
(Zhou et al., 1998; Nunez et al., 1999; Wang et al., 2005;
Choi, 2006; Gonzalez-Audicana et al., 2006), which would lead
to a different yet similar transform. Moreover, the order of the
rows in the transform is not significant and the rows are
interchangeable; however, it is recommended to keep the
intensity as the first row. Finally, the generalized transform can
be interpreted in terms of a wavelet transform in the spectral
domain across different bands at one pixel location. It can be
seen from Eq. 4 that the first row of the transformed image is
the average of all the input bands (i.e., the intensity, up to a
constant); this corresponds to the average spectral response at
this pixel location and can be interpreted as the low frequency
component in a wavelet transform. The second row is the
difference between the average of the first three bands and the
fourth band, which corresponds to a high frequency component
among the bands. Similarly, the third row is the difference
between the average of the first two bands and the third band,
while the last row is the difference of the first two bands, all up
to a normalization factor. Therefore, the generalized IHS
transform (and likewise the classical IHS transform) is essentially
equivalent to a wavelet transform in the spectral domain, where
the first component is the intensity or band average, and the
other components are band differences relative to band averages
calculated in a sequential combination of the involved bands, all
up to a constant.
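To make this spectral-wavelet reading concrete, the short check below applies the 4-D matrix of Eq. 4 to one synthetic pixel vector; the digital numbers are arbitrary illustration values, and gihs_matrix is the helper from the sketch above rather than part of the paper.

```python
pixel = np.array([80.0, 90.0, 100.0, 150.0])       # synthetic 4-band pixel
c = gihs_matrix(4) @ pixel

# First component: band average, up to the constant sqrt(4)
print(c[0], np.sqrt(4) * pixel.mean())
# Remaining components: sequential band differences, up to constants
print(c[1], (pixel[:3].mean() - pixel[3]) * 3 / np.sqrt(12))   # avg(bands 1-3) - band 4
print(c[2], (pixel[:2].mean() - pixel[2]) * 2 / np.sqrt(6))    # avg(bands 1-2) - band 3
print(c[3], (pixel[0] - pixel[1]) / np.sqrt(2))                # band 1 - band 2
```

Each printed pair is identical, confirming the interpretation of the components as a band average followed by sequential band differences.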
To apply the above transform to image fusion, the input
multispectral bands are first mapped into the transformed
space (equivalent to IHS in 3-D) with Eq. 3, 4 or 5. The
transformed intensities are then replaced by the gray values of
the panchromatic image. As the last step, the fused image bands
are obtained with the inverse transform $T^{-1}$.
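A minimal sketch of these three steps is given below, reusing the gihs_matrix helper from the earlier sketch; the function name, the array layout, and the direct substitution of the raw panchromatic gray values are assumptions for illustration (in practice the panchromatic band is often histogram-matched to the intensity component before substitution).

```python
def gihs_fusion(ms, pan):
    """Sketch of GIHS fusion: forward transform, intensity substitution,
    inverse transform.

    ms  : (N, rows, cols) multispectral stack, already resampled to the pan grid
    pan : (rows, cols) panchromatic image
    """
    n = ms.shape[0]
    T = gihs_matrix(n)                    # Eq. 5
    coeffs = T @ ms.reshape(n, -1)        # forward transform, one column per pixel
    coeffs[0] = pan.reshape(-1)           # replace intensity with pan gray values
    fused = T.T @ coeffs                  # inverse transform (T is orthonormal)
    return fused.reshape(ms.shape)
```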
3. CRITERIA-BASED IMAGE FUSION
The criteria-based image fusion method modifies the α-β
method introduced by Gungor and Shan (2005, 2006). The
underlying principle is that the fused image should meet certain
desired properties represented by a set of predefined criteria.
The method forms the fused images as a linear combination of
the input panchromatic and the upsampled multispectral images
$$F_k(m, n) = a_k(m, n) \cdot I_0(m, n) + b_k(m, n) \cdot I_k(m, n) \qquad (6)$$
where m and n are the row and column numbers, k = 1, 2, ...,
N (N = number of multispectral bands), F_k is the fused image,
I_0 is the input panchromatic image, I_k is the k-th band of
the resampled multispectral image, and a_k and b_k are the
weighting factors for pixel location (m, n), which control the
amount of contribution from the panchromatic image and the
multispectral bands, respectively. The fusion formulation needs
to determine the a_k and b_k coefficients at every pixel location,
for which rules or criteria must be set. The selected criteria will
determine the properties of the fusion outcome. Considering
that image fusion aims to retain the high spatial information, or
details, from the panchromatic image and the spectral
information, or color, from the multispectral image, we introduce
the following three criteria.
Criterion 1: The variance of the fused image should be equal to
the variance of the corresponding panchromatic image, such
that its spatial details, described by the variance, can be retained
in the fused image. Based on Eq. 6 this statement can be
expressed as
$$\mathrm{Cov}(F_k, F_k) = a_k^2 \sigma_0^2 + 2 a_k b_k \sigma_{0k} + b_k^2 \sigma_k^2 = \sigma_0^2 \qquad (7)$$
Criterion 2: The mean of the fused image should be equal to
the corresponding mean of the multispectral image such that the
color content, described by the mean, is retained in the fused
image. Based on Eq. 6 the above statement can be expressed as
$$\mathrm{mean}(F_k) = a_k \mu_0 + b_k \mu_k = \mu_k \qquad (8)$$
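As a numerical illustration of what Eqs. 7 and 8 measure, the sketch below evaluates both criteria for one pair of constant weights on synthetic bands; the data, the chosen weight values, and the use of per-band rather than per-pixel weights are assumptions made only for illustration, since the actual a_k and b_k are determined per pixel from the full set of criteria.

```python
import numpy as np

rng = np.random.default_rng(0)
I0 = rng.normal(120.0, 30.0, size=(64, 64))             # synthetic panchromatic band
Ik = 0.6 * I0 + rng.normal(40.0, 10.0, size=(64, 64))   # synthetic, correlated MS band

def check_criteria(a_k, b_k):
    """Evaluate Criterion 1 (Eq. 7) and Criterion 2 (Eq. 8) for constant weights."""
    Fk = a_k * I0 + b_k * Ik                             # Eq. 6 with constant weights
    C = np.cov(I0.ravel(), Ik.ravel())                   # sample variances and covariance
    s00, s0k, skk = C[0, 0], C[0, 1], C[1, 1]
    lhs7 = a_k**2 * s00 + 2 * a_k * b_k * s0k + b_k**2 * skk
    print("Eq. 7 expansion matches Var(F_k):", np.isclose(lhs7, np.var(Fk, ddof=1)),
          " Criterion 1 target sigma_0^2:", round(s00, 1))
    print("Eq. 8 expansion matches mean(F_k):",
          np.isclose(a_k * I0.mean() + b_k * Ik.mean(), Fk.mean()),
          " Criterion 2 target mu_k:", round(Ik.mean(), 1))

check_criteria(0.5, 0.5)
```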
In Eq. 7 and Eq. 8, the notations for image location (m, n) are
omitted for a clearer expression. The a_k and b_k coefficients are used
to construct the fused pixel at (m, n). σ_0^2, σ_k^2 and σ_0k are the