Proceedings of the XXI International Congress for Photogrammetry and Remote Sensing (Part B7-3)

THE REMOTE-SENSING IMAGE FUSION BASED ON GPU 
Jun Lu*, Baoming Zhang, Zhihui Gong, Ersen Li, Hangye Liu 
Zhengzhou Institute of Surveying and Mapping, 450052 Zhengzhou, China - Ij2000hb45@126.com 
Commission VII, WG VII/6 
KEY WORDS: GPU, FBO, Render to texture, Fragment program, IHS transform, DWT 
ABSTRACT: 
As the computing capability of the Graphics Processing Unit (GPU) grows more powerful, the GPU is widely applied to general-purpose 
computing, no longer restricted to graphics manipulation. In many image fusion algorithms, remote sensing image data can be 
processed in parallel. The fusion algorithms in the spatial domain map directly to the SIMD computing model of the GPU. This paper 
implements product fusion, ratio fusion, high-pass filtering fusion and weighted fusion in the spatial domain using GLSL, and 
accelerates the fusion computation using RTT (Render to Texture) technology. The paper then focuses on algorithms in the transform 
domain, implementing IHS transform fusion and DWT (Discrete Wavelet Transform) fusion. The IHS forward transform and inverse 
transform are mapped to two fragment shading passes, and the three components of each transform are computed and output in 
parallel using MRT (Multiple Render Targets) technology. The 2D DWT is divided into two steps of 1D DWT; an indirect-address 
texture is created for every transform step, and the transform at each level operates on the result stored in the texture of the previous 
level. An FBO is set up for every image to be fused, to store intermediate data and to perform data exchange. The results show that 
for the same fusion algorithm the fusion images produced by the two methods are identical, but the GPU implementation is clearly 
faster than the CPU implementation; and as the fusion algorithm becomes more complicated and the fused images become bigger, 
the speed advantage of the GPU implementation grows more pronounced. 
1. INTRODUCTION 
The fast development of remote sensing technology has brought a rapid 
increase in the amount of image data acquired, and image fusion 
technology provides an approach to extracting the needed information 
from these data. Many remote sensing image fusion algorithms 
process all pixels of an image in the same way, but the 
programming model on the CPU is generally serial and processes 
only one datum at a time, which leaves this data parallelism 
unexploited. The modern graphics processing unit has powerful 
parallel computing capability and exposes it through programmable 
vertex and pixel (fragment) shaders. On the GPU, remote sensing 
images can be processed in parallel, and the time spent on image 
fusion is shortened. 
2. GRAPHICS PROCESSING UNIT 
2.1 The GPU Rendering Pipeline 
The current GPU is called a “stream processor” because it has 
powerful parallel computing capability and extensive memory 
bandwidth. All data in the stream programming model are called 
streams; a stream is an ordered set of elements of the same data 
type. A kernel operates on streams, taking one or more streams as 
input data and generating one or more streams as output data. 
There are two kinds of programmable processors in the GPU: the 
vertex processor and the fragment processor. The vertex processor 
deals with the vertex streams that constitute the geometric models; 
computer graphics represents a 3D object as a triangulated network. 
As an illustration of the mechanism of the GPU, we describe the 
rendering of a texture-mapped polygon. The user first defines the 
3D position of each vertex through the API of the graphics library 
(OpenGL or DirectX); the texture coordinate associated with each 
vertex is defined at the same time. These vertices are then passed 
to the vertex engine for transformation, and for each of them a 
vertex shader (a user-defined program) is executed. The shader 
program must be SIMD in nature, i.e. the same set of operations 
has to be executed on different vertices. Next, the polygon is 
projected onto 2D and rasterized (discretized) into the framebuffer. 
At this stage the fragment engine takes over: for each rasterized 
pixel, a user-defined fragment shader is executed to process the 
data associated with that pixel (fragment). Again, the fragment 
shader must be SIMD in nature. Within the fragment shader, the 
associated texture can be fetched for processing. To utilize the 
GPU for a 2D array (an image), we can simply store the 2D data in 
a texture map; note that each element can be a 32-bit floating-point 
value. We then define a rectangle with this texture map mounted on 
it and render this texture-mapped polygon to the framebuffer. 
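
As a concrete illustration (a minimal sketch of the mechanism described above, not code taken from the paper), the GLSL fragment shader below fetches one texel of an input image and applies a per-pixel operation; drawing a texture-mapped rectangle then invokes it once for every output pixel. The host code is plain C with the OpenGL 2.x API; the names fragSrc and drawTexturedQuad are illustrative, and the shader compile/link calls (glCreateShader, glShaderSource, etc.) are omitted for brevity.

#include <GL/gl.h>

/* GLSL fragment shader: executed once per rasterized fragment, so the
   same per-pixel operation runs on every pixel of the image (SIMD). */
static const char *fragSrc =
    "uniform sampler2D image;                            \n"
    "void main() {                                       \n"
    "    /* fetch the texel associated with this pixel */\n"
    "    vec4 v = texture2D(image, gl_TexCoord[0].st);   \n"
    "    /* any per-pixel operation, here a simple scale */\n"
    "    gl_FragColor = 2.0 * v;                         \n"
    "}                                                   \n";

/* Drawing a rectangle with the image texture mounted on it drives the
   shader over the whole 2D array: one fragment per output pixel. */
static void drawTexturedQuad(void)
{
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();
}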
2.2 Render To Texture 
In the traditional GPU rendering pipeline, the destination of the 
rendering computation is the frame buffer, a part of the video 
card memory whose image data are displayed on the screen in 
real time. If we use the traditional rendering pipeline for image 
processing, the window size of the frame buffer must be the same 
as the texture size, so the image size that can be processed in one 
pass is restricted to a certain range (the packed texture must be 
smaller than the screen); otherwise the resampling of the image 
may cause distortion. We therefore use an FBO (Framebuffer 
Object) to realize render to texture. The maximal texture size 
supported by consumer-level GPUs is 4096x4096 (bigger on 
newer GPUs), which far exceeds the screen window size, and the 
off-screen rendering mode is accelerated by the hardware [1]. 
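
As a sketch of how such an off-screen render target can be set up (assuming the GL_EXT_framebuffer_object and GL_ARB_texture_float extensions available on 2008-era consumer hardware; the function name makeRenderTarget is illustrative, not from the paper):

#include <GL/gl.h>
#include <GL/glext.h>

/* Create an FBO whose color attachment is a 32-bit floating-point
   texture.  While the FBO is bound, all rendering goes into the
   texture instead of the screen, so the processable image size is
   limited by the maximum texture size rather than the window size. */
GLuint makeRenderTarget(GLsizei width, GLsizei height, GLuint *texOut)
{
    GLuint fbo, tex;

    /* the float texture that will receive the rendering result */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height,
                 0, GL_RGBA, GL_FLOAT, NULL);

    /* attach the texture as the color buffer of the FBO */
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    *texOut = tex;
    return fbo;   /* bind with glBindFramebufferEXT() before rendering */
}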
* Corresponding author.