
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B7. Beijing 2008 
3. PARALLEL DATA INPUT OF IMAGES
The arrangement of remote sensing image data in system memory affects the internal texture format and texture size, and thereby the computing speed of the fusion programs on the GPU. The images to be fused are usually low-spatial-resolution multispectral images and hyperspectral images (or a high-spatial-resolution panchromatic image). One memory block usually stores one spectral band (sometimes three bands, which form a false color composite image), one hyperspectral image, or one high-spatial-resolution panchromatic image. This paper designs two ways of packing remote sensing image data to exploit the data-level parallelism of remote sensing image fusion algorithms on the GPU.
The first way is to rearrange the three multispectral images and the hyperspectral image (or the high-spatial-resolution panchromatic image) in memory. In the packed image, every four pixels form a cell that stores the corresponding pixels of the four images to be fused. The height of the packed image is the same as the original, but its width is 4 times that of the original. The packed image data is then loaded into texture memory with the internal format of the texture set to GL_FLOAT_RGBA32_NV and the texture target set to GL_TEXTURE_RECTANGLE_ARB, so each of the 4 components of every texel occupies a 32-bit float. In texture memory, the four RGBA channels thus store the four images to be fused respectively; Figure 1 shows the packing process.
Figure 1. Load 4 images to 1 texture (4 channels of RGBA)
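On the CPU side, the first packing way amounts to interleaving the four source buffers. A minimal sketch in C (the function name and buffer layout are illustrative assumptions; in the real program the packed buffer would then be uploaded to the texture):

```c
#include <stdlib.h>

/* First packing way: interleave the corresponding pixels of the four
 * images (3 multispectral bands + 1 panchromatic/hyperspectral band)
 * so that every group of 4 consecutive floats maps to one RGBA texel. */
static float *pack_four_images(const float *ms1, const float *ms2,
                               const float *ms3, const float *pan,
                               size_t npixels)
{
    float *packed = malloc(npixels * 4 * sizeof(float));
    if (!packed) return NULL;
    for (size_t i = 0; i < npixels; ++i) {
        packed[4 * i + 0] = ms1[i]; /* R channel */
        packed[4 * i + 1] = ms2[i]; /* G channel */
        packed[4 * i + 2] = ms3[i]; /* B channel */
        packed[4 * i + 3] = pan[i]; /* A channel */
    }
    return packed;
}
```

Each group of four consecutive floats in the packed buffer becomes one RGBA texel after upload, so a single texture fetch in the fragment shader returns all four source pixels.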
If we adopt this 2D compressed storage mode, however, some blank texels may appear. We therefore try to minimize the blankness, so as to obtain the best texture size and reduce the number of texels. The texture size is obtained from the following expressions:
Height = floor(sqrt(N/4)); 
Width = ceil((double)(N/4)/(double)Height); 
The second way is to load each of the multispectral images and the hyperspectral image (or the high-spatial-resolution panchromatic image) into a separate texture; the internal format of the texture is set to GL_FLOAT_RGBA32_NV. Four pixels are packed into one texel, which means the image data is compressed in texture memory. In this way we can make full use of the four channels of every texel and the parallel computing capability of the fragment processor, greatly reduce the number of elements to be processed, and avoid packing the remote sensing image data in system memory. Figure 2 shows the packing process.

Figure 2. Load 1 image to 1 texture (4 channels of RGBA)

The first packing way reduces the number of texture fetches in the fragment shader, so the shading process is faster than in the second way, but the image data must be rearranged in system memory, which costs some CPU time. In the second packing way, the image data need not be rearranged in system memory, but the number of texture fetches in the fragment shader is 4 times that of the first way. The packing way should therefore be chosen flexibly in remote sensing image fusion, taking into account its effect on the design of the fragment programs. For simple fusion algorithms in the spatial domain, the data rearrangement in system memory should be avoided, because the time cost of this step occupies a large proportion of the total fusion time.
4. THE SPATIAL DOMAIN FUSION ON GPU 
There are many algorithms for multi-sensor remote sensing image fusion, which can be classified into spatial domain fusion and transform domain fusion [2]. In spatial domain fusion, some algorithm is adopted to process the registered high-resolution image and low-resolution image in the spatial domain to obtain the fused image. There are mainly four kinds of fusion algorithms in the spatial domain: weighted fusion, product fusion, ratio fusion and high-pass filtering fusion.
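The paper does not spell out the per-pixel formulas, but common forms of the first three can be sketched as follows (these definitions, including the Brovey-style ratio rule, are illustrative assumptions, not taken from the paper):

```c
/* Per-pixel spatial-domain fusion rules (illustrative common forms).
 * ms        : one multispectral band value
 * pan       : the high-resolution panchromatic value
 * intensity : mean of the multispectral bands at this pixel */

/* Weighted fusion: linear blend with weight w in [0, 1]. */
static float fuse_weighted(float ms, float pan, float w)
{
    return w * pan + (1.0f - w) * ms;
}

/* Product fusion: multiplicative combination of the two sources. */
static float fuse_product(float ms, float pan)
{
    return ms * pan;
}

/* Ratio (Brovey-style) fusion: scale the band by pan / intensity. */
static float fuse_ratio(float ms, float pan, float intensity)
{
    return intensity > 0.0f ? ms * (pan / intensity) : 0.0f;
}
```

High-pass filtering fusion additionally needs a neighborhood filter on the panchromatic image, so it is not a pure per-pixel rule and is omitted here.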
The algorithms of spatial domain fusion are relatively simple, and their processing on the GPU is basically identical; the difference is just the fragment shader program. The process of spatial domain fusion based on the GPU is as follows:
1) Pack the remote sensing image data. Generally the packing method is chosen based on the image data format, and we should try to reduce the work of data rearrangement.
2) Load the packed image data to texture and release the data in system memory.
3) Set up the FBO. Confirm the number and format of the textures bound to the FBO according to the number and format of the images we want to obtain from the fusion algorithm.
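The steps above, followed by the shading pass, can be emulated on the CPU to show what the fragment program does under the first packing way: each "fragment" makes one fetch and obtains all four source values at once. A minimal sketch (weighted fusion; the buffer layout follows the first packing way, and the function name is hypothetical):

```c
#include <stddef.h>

/* Emulates the fragment program of weighted fusion over a texture whose
 * RGBA channels hold (ms1, ms2, ms3, pan), as in the first packing way.
 * Each loop iteration plays the role of one fragment: a single texture
 * fetch (tex[4*i ...]) yields all four source values for that pixel. */
static void fuse_pass(const float *tex, float *out, size_t ntexels, float w)
{
    for (size_t i = 0; i < ntexels; ++i) {
        float pan = tex[4 * i + 3];          /* A channel: panchromatic */
        for (int c = 0; c < 3; ++c) {        /* R, G, B: the three bands */
            float ms = tex[4 * i + c];
            out[3 * i + c] = w * pan + (1.0f - w) * ms;
        }
    }
}
```

On the GPU the loop body runs in parallel across fragments, and the three channel results would be written to the texture(s) attached to the FBO.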
	        