R is the grey level of the filtered pixel of interest,
VAR is the variance in the filter window,
I is the mean grey level in the filter window,
U is the mean of the multiplicative noise, usually 1,
CP is the central pixel in the filter window,
Sigma is the multiplicative noise variance; it is estimated based on a Rayleigh distribution and is consistent with values derived from actual data.
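The Lee-Sigma formula itself, Eq. (6), falls on the previous page and is not reproduced here. As a rough illustration of the symbols above, the following is a minimal sketch of a conventional two-sigma Lee filter: window pixels lying within the two-sigma multiplicative-noise range of the central pixel CP (mean noise U = 1) are averaged. The window size, range, and fallback rule are all assumptions, not the authors' Eq. (6).

```python
import numpy as np

def lee_sigma_filter(img, win=7, sigma=0.25):
    """Hedged sketch of a two-sigma Lee filter (not the paper's Eq. (6)).

    For each central pixel CP, average the window pixels falling inside
    CP*(1 - 2*sigma) .. CP*(1 + 2*sigma), the two-sigma range of a
    multiplicative noise model with mean U = 1.
    """
    pad = win // 2
    padded = np.pad(img.astype(np.float64), pad, mode="reflect")
    out = np.empty(img.shape, dtype=np.float64)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            window = padded[r:r + win, c:c + win]
            cp = img[r, c]
            lo, hi = cp * (1 - 2 * sigma), cp * (1 + 2 * sigma)
            sel = window[(window >= lo) & (window <= hi)]
            # Fall back to the window mean I when too few pixels qualify.
            out[r, c] = sel.mean() if sel.size >= 3 else window.mean()
    return out
```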
The Gamma MAP filter is based on a multiplicative noise model with non-stationary mean and variance parameters. Recent work has shown that natural vegetated areas are more properly modeled as having a Gamma-distributed cross section, and this algorithm incorporates that assumption. The exact formula used is:
$$
R =
\begin{cases}
I, & C_i \le C_u \\[2pt]
\dfrac{B\,I + \sqrt{D}}{2a}, & C_u < C_i < C_{\max} \\[2pt]
CP, & C_i \ge C_{\max}
\end{cases}
\qquad (7)
$$

where
$B = a - \mathrm{NLOOK} - 1$,
$D = I^2 B^2 + 4\,a\,\mathrm{NLOOK}\,I\,CP$,
$a = (1 + C_u^2)/(C_i^2 - C_u^2)$,
$C_u = 1/\sqrt{\mathrm{NLOOK}}$,
$C_i = \sqrt{\mathrm{VAR}}/I$,
$C_{\max} = \sqrt{2}\,C_u$,
NLOOK is the number of looks,
VAR is the variance in the filter window.
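As a concrete reading of Eq. (7), here is a minimal Python sketch. The window size and number of looks are illustrative assumptions; the paper does not state its parameter values.

```python
import numpy as np

def gamma_map_filter(img, win=7, nlook=4):
    """Sketch of the Gamma MAP filter of Eq. (7); win and nlook are assumed."""
    pad = win // 2
    padded = np.pad(img.astype(np.float64), pad, mode="reflect")
    out = np.empty(img.shape, dtype=np.float64)
    cu = 1.0 / np.sqrt(nlook)
    cmax = np.sqrt(2.0) * cu
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            window = padded[r:r + win, c:c + win]
            cp = float(img[r, c])
            i_mean = window.mean()
            var = window.var()
            ci = np.sqrt(var) / i_mean if i_mean > 0 else 0.0
            if ci <= cu:          # homogeneous area: use the window mean I
                out[r, c] = i_mean
            elif ci < cmax:       # textured area: the MAP estimate
                a = (1 + cu**2) / (ci**2 - cu**2)
                b = a - nlook - 1
                d = i_mean**2 * b**2 + 4 * a * nlook * i_mean * cp
                out[r, c] = (b * i_mean + np.sqrt(d)) / (2 * a)
            else:                 # probable point target: keep CP
                out[r, c] = cp
    return out
```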
Our experiments show that applying both the Gamma MAP and Lee-Sigma filters achieves a better result than applying either the Gamma MAP or the Lee-Sigma filter twice. Here the SAR image is therefore first filtered by Gamma MAP and then by Lee-Sigma. Portions of the original and denoised SAR images are shown in Figure 2. The speckle noise of the denoised image is clearly removed while edge features are preserved.
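A usage sketch of the two-stage cascade just described, reusing the hypothetical gamma_map_filter and lee_sigma_filter helpers from the sketches above (sar_image and all parameter values are assumed):

```python
# Gamma MAP first, then Lee-Sigma, as described above.
denoised = lee_sigma_filter(gamma_map_filter(sar_image, win=7, nlook=4), win=7)
```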
Figure 2. Portions of the original SAR image (left) and the denoised SAR image (right)
3.2 The fusion algorithm

We design an algorithm based on DT-CWT for fusing a multi-spectral optical image and a SAR image. First the registered multi-spectral image and the SAR image are each decomposed by DT-CWT; then the approximation and detail parts of the two images are fused according to certain rules at each level; finally the fused image is reconstructed. This procedure is illustrated in Figure 3. The fusion procedure can be described in detail as follows:
(1) Each band of the multi-spectral optical image and the SAR image are geometrically registered to each other. After geometric rectification, their sizes are the same.
(2) The grey levels of the SAR image are stretched to match each band of the multi-spectral image in turn, using histogram equalization, as sketched below.
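A minimal sketch of this step, assuming scikit-image's match_histograms as the stretching tool; sar (a 2-D array) and ms (an H x W x bands array) are assumed inputs:

```python
import numpy as np
from skimage.exposure import match_histograms

def stretch_sar_to_bands(sar, ms):
    """Match the SAR grey-level distribution to each multi-spectral band."""
    return np.stack(
        [match_histograms(sar, ms[:, :, b]) for b in range(ms.shape[2])],
        axis=-1,
    )
```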
(3) Decompose the histogram-specified SAR image and the registered multi-spectral optical image with DT-CWT to form their multi-resolution, multi-directional descriptions. At the same time, the moduli of their complex wavelet transforms are obtained (see the sketch below).
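A sketch of the decomposition using the open-source dtcwt package; sar and band are assumed 2-D float arrays, and nlevels=4 is an arbitrary choice, not a value given in the paper:

```python
import numpy as np
import dtcwt

transform = dtcwt.Transform2d()

# Forward DT-CWT: `lowpass` holds the approximation part, `highpasses` is a
# tuple of complex detail coefficients (6 orientations per level).
pyr_sar = transform.forward(sar, nlevels=4)
pyr_opt = transform.forward(band, nlevels=4)

# Moduli of the complex coefficients give the multi-directional description.
moduli_sar = [np.abs(h) for h in pyr_sar.highpasses]
moduli_opt = [np.abs(h) for h in pyr_opt.highpasses]
```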
(4) Since the aim of image fusion is to improve image information quality, we first analyze the characteristics of the SAR and optical images. Some objects, such as lakes, roads or buildings, are distinct in the SAR image, but finer details are hard to recognize. In contrast, the optical image contains abundant detail and spectral information. We therefore design different fusion rules for the low-frequency and high-frequency parts to integrate the advantages of the two images.

Image fusion begins at the coarsest level. The grey value of each pixel in the fused low-frequency part is determined by a maximum-gray-value rule: the larger absolute gray value at the corresponding pixel of the SAR and optical images is selected. This rule preserves more of the approximation part and the spectral information of the optical image, as the sketch below illustrates.
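A minimal sketch of this selection on the DT-CWT lowpass images; the tie-break in favour of the SAR image is an assumption:

```python
import numpy as np

def fuse_lowpass(low_sar, low_opt):
    """Keep, per pixel, the lowpass coefficient with the larger magnitude."""
    return np.where(np.abs(low_sar) >= np.abs(low_opt), low_sar, low_opt)
```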
The important information in the SAR image lies mostly in the high-frequency parts, but some important details of the optical image are also in the high-frequency parts. We therefore determine each fused pixel by comparing the energy values of the corresponding pixels in the two images: the pixel with the larger energy value is taken as the fused pixel. The energy value of a pixel is calculated over a neighborhood window centered on it. Since the DT-CWT coefficients of the images are complex, with real and imaginary parts, and their moduli show clear directionality, the energy values can be computed from the moduli of the high-frequency parts. The procedure is illustrated in Figure 3.
The wavelet coefficients at point $(i, j)$ of the real and imaginary parts in the SAR image are denoted as $W_R^S(i,j)$ and $W_I^S(i,j)$ respectively. The wavelet coefficients at point $(i, j)$ of the real and imaginary parts in the optical image are denoted as $W_R^O(i,j)$ and $W_I^O(i,j)$ respectively. The magnitudes at point $(i, j)$ in the SAR image and the optical image are obtained respectively by
$$
M^S(i,j) = \sqrt{\left(W_R^S(i,j)\right)^2 + \left(W_I^S(i,j)\right)^2}, \qquad
M^O(i,j) = \sqrt{\left(W_R^O(i,j)\right)^2 + \left(W_I^O(i,j)\right)^2}
\qquad (8)
$$
The energy values at point $(i, j)$ in the SAR image and the optical image are obtained respectively by
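The energy formula itself is cut off at the page break. A common definition consistent with the windowed computation described above is the (mean of the) squared moduli over a small window centred at the pixel; the sketch below assumes that definition, a 3 x 3 window, and a SAR-favouring tie-break.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_highpass(w_sar, w_opt, win=3):
    """Energy-based selection of complex high-frequency coefficients.

    w_sar, w_opt: complex DT-CWT detail coefficients of one level and
    orientation. The energy at each point is assumed to be the windowed
    mean of squared moduli; the coefficient with the larger energy wins.
    """
    e_sar = uniform_filter(np.abs(w_sar) ** 2, size=win)
    e_opt = uniform_filter(np.abs(w_opt) ** 2, size=win)
    return np.where(e_sar >= e_opt, w_sar, w_opt)
```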
	        