A NOVEL FUSION METHOD OF SAR AND OPTICAL IMAGES FOR URBAN OBJECT EXTRACTION*
Jia Yonghong a,b, Rick S. Blum c, Ma Yunxia a
a School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, China - yhjia2000@sina.com, myx162636@163.com
b State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China - yhjia2000@sina.com
c Electrical and Computer Engineering Dept., Lehigh University, Bethlehem, PA, USA - rblum@eecs.lehigh.edu
Commission VII, WG VII/6
KEY WORDS: Fusion, Image, Method, SFIM, Texture, Information Extraction
ABSTRACT:
A new method for fusing SAR, panchromatic (Pan) and multispectral (MS) data is proposed. First, SAR texture is extracted by ratioing the despeckled SAR image to its low-pass approximation, and is used to modulate the high-pass details extracted from the available Pan image by means of the à trous wavelet decomposition. The texture-modulated high-pass details are then injected to obtain the fusion product using the high-pass filtering based on modulation (HPFM) fusion method. A set of co-registered Landsat TM, ENVISAT SAR and SPOT Pan images is used for the experiment. The results demonstrate accurate spectral preservation on vegetated regions and bare soil, as well as on textured areas (buildings and road networks) where the SAR texture information enhances the fusion product, and show that the proposed approach is effective for image interpretation and classification.
1. INTRODUCTION
Image fusion integrates different imagery data to provide more information than any single sensor alone, and it has received tremendous attention in the remote sensing literature. Many image fusion algorithms and software tools have been developed, such as IHS (Intensity, Hue, Saturation), PCA (Principal Components Analysis), SVR (Synthetic Variable Ratio) and wavelet-based fusion (Alparone et al., 2004). However, these algorithms are not effective for fusing SAR and optical images. In an urban area, many land cover types and surface materials are spectrally similar, which makes it extremely difficult to analyze an urban scene using a single sensor (Forster, 1985; Hepner and Houshmand, 1998). Some of these features can be discriminated in a radar image based on their dielectric properties and surface roughness. The objective of our study is to present a novel method for fusing SAR, panchromatic (Pan) and multispectral (MS) data for urban object extraction. SAR texture is extracted by ratioing the despeckled SAR image to its low-pass approximation, and is used to modulate the high-pass details extracted from the available Pan image by means of the à trous wavelet decomposition. The texture-modulated high-pass details are then injected using high-pass filtering based on modulation (HPFM) to obtain the fusion product; a sketch of the texture-extraction step is given below. The proposed fusion method is introduced in the following sections.
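As an illustration only, the following Python sketch shows the texture-extraction step described above. The choice of low-pass filter (a simple box filter here), the window size and the stabilizing constant are our assumptions, since the text does not specify them; despeckling is assumed to have been done beforehand.

import numpy as np
from scipy.ndimage import uniform_filter

def sar_texture(despeckled_sar, window=33, eps=1e-6):
    # Low-pass approximation of the despeckled SAR image; a box
    # filter is assumed here, as the paper does not name a filter.
    low_pass = uniform_filter(despeckled_sar.astype(np.float64), size=window)
    # Texture = ratio of the despeckled image to its low-pass
    # approximation; eps guards against division by zero.
    return despeckled_sar / (low_pass + eps)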
2. METHODOLOGY
2.1 À trous wavelet
The wavelet transform produces images at different resolutions. The wavelet representation localizes an image in both the spatial and the frequency domain (Ranchin and Wald, 2000).
There are different approaches to wavelet decomposition. One of them is the Mallat algorithm, which can use wavelet functions such as the Daubechies functions. Here we use the à trous algorithm, which uses a dyadic wavelet to merge non-dyadic data in a simple and efficient procedure. In this algorithm the discrete wavelet transform is computed by successive convolutions with a filter, applied directly to the image. Each step yields a smoothed version $I_L$ of the image ($I_1, I_2, \ldots$). The wavelet coefficients are defined as follows:
$wc_L = I_{L-1} - I_L, \quad L = 1, 2, \ldots, n$   (1)
If we decompose an image $I$ into wavelet coefficients, then we can write
$I = \sum_{L=1}^{n} wc_L + I_r$   (2)
in which $I_r$ is a residual image. In this approach all wavelet
planes have the same number of pixels as the original image.
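The following Python sketch illustrates the decomposition of Eq. (1) and the reconstruction of Eq. (2). The B3-spline kernel and the reflective boundary handling are common choices for the à trous algorithm that we assume here; the text does not fix a particular filter.

import numpy as np
from scipy.ndimage import convolve1d

def atrous_decompose(image, levels):
    # B3-spline kernel, a common (assumed) choice for the a trous scheme.
    base = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0
    current = image.astype(np.float64)
    planes = []
    for level in range(levels):
        # Insert 2**level - 1 zeros between the taps (the "holes"
        # that give the algorithm its name).
        kernel = np.zeros((len(base) - 1) * 2**level + 1)
        kernel[::2**level] = base
        # Separable convolution: rows, then columns, giving I_L.
        smooth = convolve1d(current, kernel, axis=0, mode='reflect')
        smooth = convolve1d(smooth, kernel, axis=1, mode='reflect')
        planes.append(current - smooth)  # wc_L = I_{L-1} - I_L, Eq. (1)
        current = smooth
    return planes, current  # wavelet planes and residual I_r

# Reconstruction check, Eq. (2): I = sum of wavelet planes + I_r.
img = np.random.rand(64, 64)
planes, residual = atrous_decompose(img, levels=3)
assert np.allclose(img, sum(planes) + residual)

Every plane and the residual have the same shape as the input, matching the statement above that all wavelet planes keep the original number of pixels.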
* The project is supported by the State Surveying and Mapping Fund of China.