
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B7. Beijing 2008 
[Figure 3. The process of IHS transform fusion (panel labels: panchromatic image, histogram matching)]
5.2 The DWT fusion on GPU 
The wavelet transform has long been applied in remote sensing image fusion, but the traditional CPU implementation is slow because the computation is complex. Tien-Tsin Wong [4] proposed a method to realize the DWT on the GPU for a single image; this paper focuses on the fusion of two images.
The DWT of a digital image can be regarded as a 2D DWT, which can be separated into two 1D DWT passes: first horizontal, then vertical. Take the horizontal 1D DWT as an example. Let x_j(n) be the input signal at level j; l_{j-1}(n) and h_{j-1}(n) are the low-frequency (coarse) and high-frequency (detail) coefficients after filtering and downsampling:

l_{j-1}(n) = Σ_k h(k) x_j(2n − k)    (5)

h_{j-1}(n) = Σ_k g(k) x_j(2n + 1 − k)    (6)

where h(k) is the low-pass filter and g(k) is the high-pass filter. For an efficient SIMD implementation on the GPU we rewrite (5) and (6), expressing the 1D DWT in the form of a single signal input and output:

z_{j-1}(n) = Σ_k f_{d,j-1}(n, k) f_{z,j}(n, k)    (7)

where f_{d,j-1}(n, k) is a position-dependent filter that selects the proper coefficient from h(k) and g(k) at decomposition level j−1, and f_{z,j}(n, k) is a function that returns the corresponding data at level j. Both can be implemented by the indirect addressing technique.

[Figure 4. Mapping to the base position in 1D DWT (output packed as low-pass half followed by high-pass half)]

Assume that the length of the input data sequence is P (P = 9 in Figure 4). We should first determine whether the output at position n (n ∈ [0, P − 1]) of the 1D DWT is a low-pass or a high-pass coefficient; we define a filter selector variable α:

α = { 1 (high pass), n ≥ ⌈P/2⌉ ;  0 (low pass), n < ⌈P/2⌉ }    (8)

With α, we can get the position-dependent filter f_{d,j-1}(n, k). Then we should determine the filtering center of the input signal corresponding to the output position n; we define the filtering center b, which can be computed by the following equation:

b = 2(n − α⌈P/2⌉) + α + 0.5    (9)

The 0.5 is added to address the pixel center in texture fetching. Once b is determined, we can fetch all the elements of the input data needed for filtering.
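As a sketch of how Eqs. (5)-(9) fit together, the following NumPy code implements one level of the packed 1D DWT on the CPU; the GPU fragment-program version performs the same index arithmetic through texture fetches. The filter taps (LeGall 5/3) and all function names are illustrative choices, not from the paper, and the 0.5 texel-center offset of Eq. (9) is dropped because plain array indexing is used instead of texture sampling:

```python
import numpy as np

def mirror(m, P):
    # Symmetric boundary extension (see Figure 5): ... 2 1 | 0 .. P-1 | P-2 ...
    if m < 0:
        return -m
    if m >= P:
        return 2 * (P - 1) - m
    return m

def packed_dwt_1d(x, h, hc, g, gc):
    """One level of 1D DWT with low- and high-pass coefficients packed into a
    single output array of the same length, following Eqs. (5)-(9).
    h, g: low-/high-pass filter taps; hc, gc: index of the tap at k = 0."""
    P = len(x)
    half = (P + 1) // 2                    # ceil(P/2): size of the low-pass part
    z = np.empty(P)
    for n in range(P):
        a = 0 if n < half else 1           # filter selector alpha, Eq. (8)
        b = 2 * (n - a * half) + a         # filtering center b, Eq. (9)
        f, c = (g, gc) if a else (h, hc)   # position-dependent filter, Eq. (7)
        # Eqs. (5)/(6): sum over k of f(k) * x(b - k), with mirrored boundaries
        z[n] = sum(tap * x[mirror(b - (i - c), P)] for i, tap in enumerate(f))
    return z

def packed_dwt_2d(img, h, hc, g, gc):
    # 2D DWT as two 1D passes: first horizontal (rows), then vertical (columns)
    tmp = np.apply_along_axis(packed_dwt_1d, 1, img, h, hc, g, gc)
    return np.apply_along_axis(packed_dwt_1d, 0, tmp, h, hc, g, gc)

# LeGall 5/3 analysis filters (an illustrative choice of h(k) and g(k))
h = np.array([-1.0, 2.0, 6.0, 2.0, -1.0]) / 8.0   # low-pass, tap at k = 0 is hc = 2
g = np.array([-0.5, 1.0, -0.5])                    # high-pass, tap at k = 0 is gc = 1
```

As a sanity check of the index mapping: for P = 9, the first high-pass output (n = 5) has α = 1 and b = 2(5 − 5) + 1 = 1, i.e. it is centred on sample x(1), exactly as Eq. (6) with output index 0 requires.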
If the fetching of neighbours goes beyond the image boundary of the current level, we need to apply boundary extension. Common extension schemes include periodic padding, symmetric padding, and zero padding. Figure 5 shows the horizontal boundary extension (symmetric padding with a filter kernel width of 5):
[Figure 5. Boundary extension (symmetric padding): extended index sequence 2 1 | 0 1 2 3 4 5 6 7 8 | 7 6, with the boundaries between the mirrored parts and the signal]
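The symmetric padding of Figure 5 can be sketched as a small index-mapping helper (a minimal Python illustration; the function name is ours, and on the GPU the same mapping is applied per texture fetch):

```python
def extend_symmetric(P, pad):
    """Whole-sample symmetric extension of the index range 0..P-1 by `pad`
    samples on each side, mirroring about the boundary samples themselves."""
    def mirror(m):
        if m < 0:
            return -m
        if m >= P:
            return 2 * (P - 1) - m
        return m
    return [mirror(m) for m in range(-pad, P + pad)]

# A width-5 filter kernel needs pad = 2 extra samples on each side.
# For P = 9 this reproduces the index sequence of Figure 5:
# [2, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 7, 6]
```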
Thank you.