Full text: Proceedings, XXth congress (Part 4)

International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XXXV, Part B4. Istanbul 2004 
  
2.3 Fusion method based on a shift invariant extension of the DWT
More recently, the wavelet transform has been used for merging 
data derived from different resolution sensors (Rockinger, 
1996). To overcome the shift dependency of the wavelet fusion 
method, the input images must be decomposed into a shift 
invariant representation. For convenience, we 
summarize this approach for the case of 1D input signals. 
As for the discrete wavelet transform (DWT), each stage of the 
shift invariant DWT (SIDWT) splits the input sequence into the 
wavelet sequence W_i(n) and the scale sequence S_i(n), which 
serves as input for the next decomposition level (Rockinger, 
1996): 
W_{i+1}(n) = \sum_k g(2^i k) \, S_i(n-k)    (9)

S_{i+1}(n) = \sum_k h(2^i k) \, S_i(n-k)    (10)
The zeroth-level scale sequence is set equal to the input 
sequence, S_0(n) = f(n), thus defining the complete SIDWT 
decomposition scheme. In contrast to the standard DWT 
decomposition scheme, the subsampling is dropped, resulting in 
a highly redundant wavelet representation. The analysis filters 
g(2^i k) and h(2^i k) at level i are obtained by inserting the 
appropriate number of zeros between the filter taps of the 
prototype filters g(k) and h(k). 
The reconstruction of the input sequence is performed by the 
inverse SIDWT as a convolution of both the shift invariant wavelet 
sequence and the scale sequence with the appropriate 
reconstruction filters g(2^i k) and h(2^i k) as follows: 

S_i(n) = \sum_k h(2^i k) \, S_{i+1}(n-k) + \sum_k g(2^i k) \, W_{i+1}(n-k)    (11)
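As a sketch, one analysis stage of Eqs. (9)-(10) can be written in Python with NumPy. The Haar-like prototype filters and the circular boundary handling are illustrative assumptions, not part of the original method:

```python
import numpy as np

def atrous_filter(f, level):
    """Insert 2**level - 1 zeros between the taps of prototype filter f."""
    f = np.asarray(f, dtype=float)
    up = np.zeros(2**level * (len(f) - 1) + 1)
    up[::2**level] = f
    return up

def circular_conv(x, f):
    """y(n) = sum_k f(k) x(n - k), with circular (periodic) boundaries."""
    y = np.zeros(len(x))
    for k, fk in enumerate(f):
        y += fk * np.roll(x, k)
    return y

def sidwt_stage(s, g, h, level):
    """Eqs. (9)-(10): split S_i into W_{i+1} and S_{i+1}; no subsampling."""
    w_next = circular_conv(s, atrous_filter(g, level))
    s_next = circular_conv(s, atrous_filter(h, level))
    return w_next, s_next

# Haar-like prototype filters (an illustrative choice). For this pair the
# inverse SIDWT of Eq. (11) reduces to a plain sum: S_i = S_{i+1} + W_{i+1}.
g_proto = [0.5, -0.5]
h_proto = [0.5, 0.5]
```

For these particular filters, S_{i+1}(n) + W_{i+1}(n) = S_i(n) at every level, so the shift invariance and perfect reconstruction are easy to verify; note that the sequences keep their full length at every level, which is exactly the redundancy mentioned above.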
2.4. Fusion based on a Laplacian pyramid method 
The Laplacian filtered image can be realized as a difference of 
Gaussian filtered images. Accordingly, the Laplacian pyramid is 
obtainable from the Gaussian pyramid. Let G^k 
(k = 1, ..., N) be the k-th level of the Gaussian pyramid for an image I. 
Then 

G^0 = I 
G^k = [w * G^{k-1}]_{\downarrow 2},  for k = 1, ..., N    (13)
where the kernel w is obtained from a discrete Gaussian density, 
'*' denotes two-dimensional convolution, and the notation 
[...]_{\downarrow 2} indicates that the image in brackets is down-sampled by 2 
(in both the horizontal and vertical directions), which is 
accomplished by selecting every other point in the filtered 
image. The Gaussian pyramid is a set of lowpass filtered copies 
of the image, each with a cut-off frequency one octave lower 
than its predecessor. The Laplacian pyramid is determined by 
L^N = G^N 
L^k = G^k - 4w * [G^{k+1}]_{\uparrow 2},  for k = 0, ..., N-1    (14)
where the notation [...]_{\uparrow 2} indicates that the image inside the 
brackets is up-sampled by 2 (in both the horizontal and vertical 
directions). Here, convolution by the Gaussian kernel has the 
effect of interpolation by a low-pass filter. 
The Laplacian pyramid transform decomposes the image into 
multiple levels. Each level in the Laplacian pyramid represents 
the result of convolving the original image with a difference of 
two Gaussian functions; thus each successive level is a band- 
passed, sub-sampled and scaled version of the original image. 
The Laplacian pyramid has a perfect reconstruction property; 
the original image can be reconstructed by reversing the 
Laplacian pyramid operations: 
G^N = L^N 
G^k = L^k + 4w * [G^{k+1}]_{\uparrow 2},  for k = 0, ..., N-1    (15)

G^0 is identical to the original image I. 
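A minimal sketch of Eqs. (13)-(15) in Python with NumPy. The 5-tap kernel and the reflect padding are illustrative assumptions (the paper only requires w to approximate a discrete Gaussian):

```python
import numpy as np

# 5-tap approximation of a discrete Gaussian kernel w (an assumed choice).
W5 = np.array([1., 4., 6., 4., 1.]) / 16.

def blur(img, w):
    """Separable 2-D convolution with the 1-D kernel w (reflect padding)."""
    r = len(w) // 2
    p = np.pad(img, r, mode='reflect')
    tmp = sum(wk * p[:, k:k + img.shape[1]] for k, wk in enumerate(w))
    return sum(wk * tmp[k:k + img.shape[0], :] for k, wk in enumerate(w))

def reduce_level(g):
    """Eq. (13): G^k = [w * G^{k-1}] down-sampled by 2 in each direction."""
    return blur(g, W5)[::2, ::2]

def expand_level(g, shape):
    """4w * [G^{k+1}] up-sampled by 2: zero-interleave, interpolate with 4w."""
    up = np.zeros(shape)
    up[::2, ::2] = g
    return blur(up, 4. * W5)

def laplacian_pyramid(img, n_levels):
    """Eq. (14): L^k = G^k - expand(G^{k+1}), with L^N = G^N on top."""
    gauss = [img]
    for _ in range(n_levels):
        gauss.append(reduce_level(gauss[-1]))
    lap = [g - expand_level(gn, g.shape) for g, gn in zip(gauss, gauss[1:])]
    lap.append(gauss[-1])  # top level: L^N = G^N
    return lap

def reconstruct(lap):
    """Eq. (15): G^k = L^k + expand(G^{k+1}); returns G^0."""
    g = lap[-1]
    for l in reversed(lap[:-1]):
        g = l + expand_level(g, l.shape)
    return g
```

Reconstruction is exact here because Eq. (15) inverts Eq. (14) level by level with the same expand operator, which is the perfect reconstruction property stated above.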
Fusion is performed in the Laplacian pyramid domain by 
constructing a fused pyramid. The pyramid coefficient (or 
hyperpixel) at each location in the fused pyramid is obtained by 
selecting the hyperpixel of the sensor pyramid that has the 
largest absolute value. Let L_A and L_B be the Laplacian 
pyramids of two images A and B. The fused pyramid L_F is 
determined by 
L_F^k(i, j) = L_A^k(i, j)   if |L_A^k(i, j)| > |L_B^k(i, j)| 
L_F^k(i, j) = L_B^k(i, j)   otherwise    (16)
where k is the level of the pyramid and (i,j) denotes a hyperpixel 
at that level (Sharma, 1999). 
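The selection rule above can be sketched as follows, operating on pyramids represented as lists of NumPy arrays (the function name is illustrative):

```python
import numpy as np

def fuse_pyramids(lap_a, lap_b):
    """Per-hyperpixel selection of Eq. (16): at every level and location,
    keep the coefficient with the larger absolute value."""
    return [np.where(np.abs(a) >= np.abs(b), a, b)
            for a, b in zip(lap_a, lap_b)]
```

The fused image is then obtained by applying the pyramid reconstruction of Eq. (15) to the fused pyramid.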
2.5. Fusion method based on Contrast pyramids 
Toet (1990) introduced an image fusion technique which 
preserves local luminance contrast in the sensor images. The 
technique is based on selection of image features with 
maximum contrast rather than maximum magnitude (Sharma, 
1999). It is motivated by the fact that the human visual system 
is based on contrast and hence the resulting fused image will 
provide better details to a human observer. The pyramid 
decomposition used for this technique is related to luminance 
processing in the early stages of the human visual system which 
are sensitive to local luminance contrast (Toet, 1990). Fusion is 
performed using the multiresolution contrast pyramid. The k-th 
level R^k of the contrast pyramid is obtained by: 

R^N = G^N 
R^k = G^k / (4w * [G^{k+1}]_{\uparrow 2}),  for k = 0, ..., N-1    (17)
  
The hyperpixels of the contrast pyramid R are related to the 
local luminance contrast. Luminance contrast C is defined as: 
C = (L - L_b) / L_b = L / L_b - 1    (18)
where L is the luminance at a certain location in the image and 
L_b is the luminance of the local background. The denominator 
 
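The contrast measure of Eq. (18) and one level of the ratio in Eq. (17) can be sketched as below; the array names are illustrative, and the small epsilon guarding against division by zero is an added assumption not present in the original formulas:

```python
import numpy as np

def luminance_contrast(lum, background, eps=1e-6):
    """Eq. (18): C = (L - L_b) / L_b = L / L_b - 1."""
    return lum / (background + eps) - 1.0

def contrast_level(gk, gk1_expanded, eps=1e-6):
    """Eq. (17): one level of the contrast pyramid,
    R^k = G^k / expand(G^{k+1}), where gk1_expanded is the
    already up-sampled and interpolated next Gaussian level."""
    return gk / (gk1_expanded + eps)
```

Here the expanded Gaussian level plays the role of the local background luminance L_b, so the pyramid coefficients follow the contrast definition of Eq. (18) up to the constant offset of 1.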
	        
Thank you.