where $A_j$ and $B_j$ are the compared bands of a multispectral image, RMSE is the root mean squared error, $\mu_{A_j}$ is the mean value of $A_j$, $K$ is the number of bands, and $h/l$ is the high/low resolution image ratio; zero mean normalised cross-correlation, ZNCC:
$$
\mathrm{ZNCC}(A_i,B_i)=\frac{\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl(A_i(m,n)-\mu_{A_i}\bigr)\bigl(B_i(m,n)-\mu_{B_i}\bigr)}{\sqrt{\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl(A_i(m,n)-\mu_{A_i}\bigr)^{2}\;\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl(B_i(m,n)-\mu_{B_i}\bigr)^{2}}} \qquad (4)
$$
where $A_i$, $B_i$ are the compared images; $\mu_{A_i}$, $\mu_{B_i}$ are the mean values of the images $A_i$, $B_i$, respectively; and $M \times N$ is the size of the compared images.
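For clarity, a minimal NumPy sketch of these two spectral consistency measures is given below. It assumes equally sized, co-registered bands and follows the standard ERGAS definition whose symbols are listed above; the function and argument names are illustrative only, not taken from the cited works.

```python
import numpy as np

def zncc(a, b):
    """Zero mean normalised cross-correlation, Eq. (4)."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    a0 = a - a.mean()
    b0 = b - b.mean()
    return (a0 * b0).sum() / np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum())

def ergas(fused, reference, ratio):
    """ERGAS over K bands; `ratio` is the high/low resolution ratio h/l.

    `fused` and `reference` are sequences of K equally sized bands
    (the bands A_j and B_j above).
    """
    terms = []
    for a, b in zip(fused, reference):
        a = np.asarray(a, dtype=np.float64)
        b = np.asarray(b, dtype=np.float64)
        rmse = np.sqrt(np.mean((a - b) ** 2))        # per-band RMSE
        terms.append((rmse / a.mean()) ** 2)         # normalised by the band mean
    return 100.0 * ratio * np.sqrt(np.mean(terms))
```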
2.2 Spatial consistency
Spatial consistency is another aspect of fused imagery assessment. To date, relatively few papers deal with spatial consistency assessment. Almost all of them use a single-scale edge detector (gradient, Laplacian, or Sobel) together with an evaluation metric, usually the correlation coefficient, to calculate the distance between the edge maps (Shi, 2003; Zhou, 1998; Pradhan, 2006). Here the comparison is made between the fused bands and the corresponding panchromatic image, as sketched below. Another approach calculates the percentage of true and false edges introduced into the fused band using the Sobel edge detector (Pradhan, 2006). Several works on fusion report the use of the SSIM and ERGAS measures for spatial consistency assessment (Lillo-Saavedra, 2005), with the panchromatic image used as the reference instead of a spectral band.
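As an illustration of that single-scale baseline, the following sketch correlates Sobel edge maps of a fused band and the panchromatic image; the function names are ours and do not come from the cited works.

```python
import numpy as np
from scipy.ndimage import sobel

def edge_map_correlation(fused_band, pan):
    """Correlation coefficient between Sobel edge maps of a fused band
    and the corresponding panchromatic image (single-scale baseline)."""
    def edge_map(img):
        img = np.asarray(img, dtype=np.float64)
        # Gradient magnitude from horizontal and vertical Sobel responses.
        return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

    e1 = edge_map(fused_band)
    e2 = edge_map(pan)
    e1 -= e1.mean()
    e2 -= e2.mean()
    return (e1 * e2).sum() / np.sqrt((e1 ** 2).sum() * (e2 ** 2).sum())
```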
In this paper we propose an additional measure for spatial consistency assessment. The measure uses phase congruency (PC) (Kovesi, 1999) for feature extraction from an image. Its invariance to intensity and contrast changes, together with its multiscale nature, allows a more confident assessment to be obtained than with single-scale edge detectors.
3. PHASE CONGRUENCY FOR SPATIAL
CONSISTENCY ASSESSMENT
3.1 Phase congruency
Phase congruency was proposed as an intensity- and contrast-invariant, dimensionless measure of feature significance, and has been used for signal matching and feature extraction (Kovesi, 1999). Phase congruency at a point x may be defined in the following way:
$$
PC(x)=\frac{\sum_{o}\sum_{s} W_{o}(x)\,\bigl\lfloor A_{so}(x)\,\Delta\Phi_{so}(x)-T_{o}\bigr\rfloor}{\sum_{o}\sum_{s} A_{so}(x)+\varepsilon}, \qquad (5)
$$

where $A_{so}$ is the amplitude of the component in the Fourier series expansion, $\Delta\Phi_{so}$ is the phase deviation function, $W_{o}$ is the PC weighting function, $o$ is the index over orientation, $s$ is the index over scale, $T_{o}$ is the noise compensation term, $\varepsilon$ is a term added to prevent division by zero, and $\lfloor\;\rfloor$ denotes that the enclosed quantity is permitted to be non-negative (Kovesi, 1999).
A bank of 2D log-Gabor wavelets is used for feature extraction. The different scales and orientations of the wavelets in the bank allow more information to be extracted about the structure (detail) of the image under assessment.
Multiscale image analysis, as opposed to single-scale gradient operators, allows more information on image structure, features and edges to be extracted. The result of PC extraction is a phase congruency feature map. This map represents the structure of the image and allows feature-based image comparison to be performed.
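A compact NumPy sketch of such a PC feature map extraction is given below. It builds a small 2D log-Gabor bank and computes the simpler local-energy form $|E(x)|/(\sum A_{so}(x)+\varepsilon)$ per orientation, omitting the noise compensation $T_o$ and weighting $W_o$ of Eq. (5); all parameter values are illustrative defaults, not those used in our experiments.

```python
import numpy as np

def phase_congruency(img, nscale=4, norient=6, min_wavelength=3,
                     mult=2.1, sigma_onf=0.55, eps=1e-4):
    """Simplified 2D phase congruency feature map (values roughly in [0, 1])."""
    rows, cols = img.shape
    IMG = np.fft.fft2(img)

    # Normalised frequency coordinates, built centred on DC and then shifted
    # so that the DC term sits at index [0, 0], matching the FFT layout.
    y, x = np.mgrid[-(rows // 2):rows - rows // 2,
                    -(cols // 2):cols - cols // 2]
    x = x / cols
    y = y / rows
    radius = np.fft.ifftshift(np.sqrt(x ** 2 + y ** 2))
    theta = np.fft.ifftshift(np.arctan2(-y, x))
    radius[0, 0] = 1.0                      # avoid log(0) at the DC term

    pc = np.zeros((rows, cols))
    for o in range(norient):
        angle = o * np.pi / norient
        # Angular (orientation) selectivity of this filter sub-bank.
        d_theta = np.abs(np.angle(np.exp(1j * (theta - angle))))
        spread = np.exp(-d_theta ** 2 / (2 * (1.5 * np.pi / norient) ** 2))

        sum_even = np.zeros((rows, cols))
        sum_odd = np.zeros((rows, cols))
        sum_amp = np.zeros((rows, cols))
        for s in range(nscale):
            f0 = 1.0 / (min_wavelength * mult ** s)
            # Radial log-Gabor transfer function at this scale.
            log_gabor = np.exp(-(np.log(radius / f0)) ** 2 /
                               (2 * np.log(sigma_onf) ** 2))
            log_gabor[0, 0] = 0.0
            # Complex response: real part = even-symmetric filter output,
            # imaginary part = odd-symmetric filter output.
            eo = np.fft.ifft2(IMG * log_gabor * spread)
            sum_even += eo.real
            sum_odd += eo.imag
            sum_amp += np.abs(eo)

        # Local energy over the sum of amplitudes: close to 1 where the
        # phases agree across scales, close to 0 elsewhere.
        energy = np.sqrt(sum_even ** 2 + sum_odd ** 2)
        pc += energy / (sum_amp + eps)

    return pc / norient
```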
3.2 Comparison metric
Zero mean normalised cross-correlation was selected as the comparison metric for the PC feature maps. Liu et al. report the successful application of this metric for the task (Liu, 2008). ZNCC produces a real value in the range [-1, 1], where 1 indicates full similarity of the compared maps and -1 indicates complete dissimilarity.
The pan-sharpened spectral band and the corresponding panchromatic image are used for extraction of the PC feature maps, and the maps are compared using ZNCC. The panchromatic image serves as the reference image for spatial consistency assessment (Figure 1).
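Putting the pieces together, the proposed check reduces to a few lines, reusing the zncc and phase_congruency sketches given earlier; the panchromatic image is assumed to be co-registered with the fused band.

```python
def spatial_consistency(fused_band, pan):
    """ZNCC between the PC feature maps of a fused band and the pan image."""
    return zncc(phase_congruency(fused_band), phase_congruency(pan))
```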
3.3 Assessment protocol
The benefit of applying PC for assessment may be illustrated by comparison with other assessment methods on a pan-sharpened dataset consisting of fused images of known quality. PC is expected to show a trend similar to that of the other assessment measures and to provide similar assessment results. Well-known fusion methods should be used in order to produce a dataset of the expected quality.
Several well-known pan-sharpening methods were selected to produce fused images of expected quality (spatial and spectral consistency): Intensity-Hue-Saturation (IHS) image fusion (Welch, 1987), image fusion using Principal Component Analysis (PCA) (Welch, 1987), wavelet image fusion (Aiazzi, 2002), and the General Image Fusion method (GIF) (Wang, 2005). Generally, the well-known IHS and PCA methods produce fusion results with good spatial consistency; wavelet fusion produces good spectral consistency; and the GIF method produces a compromise between acceptable spectral and spatial consistency. The fusion methods can thus be ranked according to the quality of the produced result, in terms of either spectral or spatial consistency. These methods were chosen as reference methods to produce the expected results for the pan-sharpened dataset used for assessment and comparison.
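As an example of the kind of reference method used, a minimal sketch of fast IHS-substitution fusion is shown below; the bands are assumed to be already interpolated to the panchromatic grid, and the mean/standard-deviation matching stands in for full histogram matching.

```python
import numpy as np

def ihs_fusion(r, g, b, pan):
    """Fast IHS-substitution pan-sharpening for three bands."""
    i = (r + g + b) / 3.0                                # intensity component
    pan_m = (pan - pan.mean()) / (pan.std() + 1e-12)     # match pan to intensity
    pan_m = pan_m * i.std() + i.mean()
    d = pan_m - i                                        # detail added to each band
    return r + d, g + d, b + d                           # equivalent to replacing I by pan
```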
During the first assessment setup, a set of multispectral images was pan-sharpened by the following methods: IHS, PCA, à trous wavelet image fusion (ATWT, cubic B-spline), and two modifications of the General Image Fusion method (GIF-1 and GIF-2). GIF-1 extracts high-resolution image detail (the high-frequency component) from the panchromatic image and adds it to the interpolated spectral image; the amount of transferred image detail is established using regression (Starovoitov, 2007). GIF-2 likewise adds image detail to the interpolated spectral image (Ehlers, 2004). The IHS and PCA image fusion methods were run using ENVI software, while all the other fusion methods were implemented in IDL.
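The GIF-1 step described above can be illustrated by the following detail-injection sketch. It is our own simplification: the Gaussian low-pass and the least-squares gain estimate stand in for the actual filter and regression of (Starovoitov, 2007), and the image shapes are assumed to match after interpolation.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def detail_injection(ms_band, pan, scale=4, gain=None):
    """GIF-style pan-sharpening of one band by high-frequency detail injection."""
    ms_up = zoom(np.asarray(ms_band, dtype=np.float64), scale, order=3)  # interpolated band
    pan = np.asarray(pan, dtype=np.float64)
    pan_low = gaussian_filter(pan, sigma=scale)      # low-frequency part of pan
    detail = pan - pan_low                           # high-frequency detail to transfer
    if gain is None:
        # Least-squares estimate of how much detail to transfer.
        x = (pan_low - pan_low.mean()).ravel()
        y = (ms_up - ms_up.mean()).ravel()
        gain = float(x @ y) / float(x @ x)
    return ms_up + gain * detail
```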