Technical Commission III (B3)

1. DATA FUSION 
Data fusion can make a substantial contribution to building an effective monitoring system. It is analogous to the ongoing cognitive process by which humans continually integrate data from their senses to make inferences about the external world.
Multi-sensor data fusion is a method of combining data from multiple (and possibly diverse) sensors in order to make inferences about a physical event, activity, or situation, including applications such as automatic identification of targets or analysis of battlefield situations [1].
However, the following facts have been described [2]:
1. Combining data from multiple inaccurate sensors, each with an individual probability of correct inference below 0.5, does not provide a significant overall advantage.
2. Combining data from multiple highly accurate sensors, each with an individual probability of correct inference above 0.95, does not provide a significant increase in inference accuracy.
3. When the number of sensors becomes large, adding further identical sensors does not significantly improve inference accuracy.
4. The greatest marginal improvement from sensor fusion occurs for a moderate number of sensors, each having a reasonable probability of correct identification.
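As an illustration of these four facts, the probability that a simple majority vote over n independent sensors is correct can be computed from the binomial distribution (a sketch for intuition, not taken from the cited references):

```python
from math import comb

def majority_vote_accuracy(n, p):
    """Probability that a majority of n independent sensors, each correct
    with probability p, yields the correct inference (n odd)."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# Weak sensors (p < 0.5) get worse with fusion, very strong sensors
# (p > 0.95) gain little, and moderate sensors (p around 0.7) gain the most:
for p in (0.4, 0.7, 0.95):
    print(p, [round(majority_vote_accuracy(n, p), 3) for n in (1, 3, 7, 15)])
```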
- Different levels of data fusion:
1. Pixel-level fusion: at the lowest level, the registered pixel data from all image sets are used to perform detection and discrimination functions.
2. Feature-level fusion: combines the features of objects that are detected and segmented in the individual sensor domains.
3. Decision-level fusion: fusion at the decision level (also called post-decision or post-detection fusion) combines the decisions of independent sensor detection/classification paths by Boolean operators (AND, OR) or by a heuristic score (e.g., M-of-N, maximum vote, or weighted sum).
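A minimal sketch of such decision-level combination rules (function and rule names are illustrative, not from the paper):

```python
def fuse_decisions(decisions, rule="m_of_n", m=None, weights=None):
    # Decision-level fusion of independent sensor detections (booleans).
    n = len(decisions)
    votes = sum(decisions)
    if rule == "and":
        return votes == n
    if rule == "or":
        return votes > 0
    if rule == "m_of_n":            # declare detection if at least m sensors agree
        return votes >= m
    if rule == "weighted":          # heuristic weighted-sum score vs. 0.5 threshold
        score = sum(w * d for w, d in zip(weights, decisions))
        return score / sum(weights) >= 0.5
    raise ValueError(rule)

detections = [True, False, True, True]
all_agree = fuse_decisions(detections, "and")          # False: one sensor did not fire
m_of_n    = fuse_decisions(detections, "m_of_n", m=3)  # True: 3 of 4 sensors fired
```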
2. IMAGE FUSION 
The main purpose of image fusion is to increase both the spectral and spatial resolution of images by combining multiple images.
To carry out image fusion well, attention must be paid to three concepts [3].
2.1. Image registration 
Image registration is the process of transforming several images into the same coordinate system. For example, given an image, several copies of it may be deformed by rotation, shearing, or twisting; registration recovers the transformations that align them.
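A minimal sketch of this idea, assuming hypothetical point correspondences between the reference image and the deformed copy are already available (real registration pipelines would first detect and match features):

```python
import numpy as np

# Hypothetical ground truth: the copy is rotated by 15 degrees and shifted.
theta = np.deg2rad(15)
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([4.0, -2.0])

ref_pts = np.random.default_rng(0).uniform(0, 100, size=(20, 2))
mov_pts = ref_pts @ A_true.T + t_true      # same points seen in the deformed image

# Least-squares estimate of the affine transform mapping reference -> moving:
X = np.hstack([ref_pts, np.ones((len(ref_pts), 1))])
M, *_ = np.linalg.lstsq(X, mov_pts, rcond=None)
A_est, t_est = M[:2].T, M[2]               # recovered rotation and translation
```

Inverting the estimated transform then brings the deformed image back into the reference coordinate system.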
2.2. Image resampling
Image resampling is the procedure that creates a new version of the original image with a different width and height in pixels. Increasing the size is called upsampling; conversely, decreasing the size is called downsampling.
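A simple nearest-neighbour resampling sketch in NumPy (one of several possible interpolation schemes):

```python
import numpy as np

def resample_nearest(img, new_h, new_w):
    # Nearest-neighbour resampling to a new height/width in pixels.
    h, w = img.shape[:2]
    rows = (np.arange(new_h) * h / new_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    return img[rows][:, cols]

img  = np.arange(16).reshape(4, 4)
up   = resample_nearest(img, 8, 8)   # upsampling: 4x4 -> 8x8
down = resample_nearest(img, 2, 2)   # downsampling: 4x4 -> 2x2
```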
2.3. Histogram matching
Consider two images X and Y. If Y is histogram-matched to X, the pixel values of Y are changed by a nonlinear transform such that the histogram of the new Y is the same as that of X.
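A sketch of one common way to realise this nonlinear transform, by mapping the cumulative distribution of Y onto that of X:

```python
import numpy as np

def histogram_match(y, x):
    # Transform Y's pixel values so its histogram approximates X's.
    y_vals, y_idx, y_counts = np.unique(y.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    x_vals, x_counts = np.unique(x.ravel(), return_counts=True)
    y_cdf = np.cumsum(y_counts) / y.size
    x_cdf = np.cumsum(x_counts) / x.size
    # Map each quantile of Y onto the value X takes at the same quantile.
    matched_vals = np.interp(y_cdf, x_cdf, x_vals)
    return matched_vals[y_idx].reshape(y.shape)

y = np.random.default_rng(1).integers(0, 50, (8, 8))
x = np.random.default_rng(2).integers(100, 200, (8, 8))
matched = histogram_match(y, x)      # values now lie in X's range
```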
3. IMAGE FUSION METHODS
3.1. Pan-sharpening 
The goal of pan-sharpening is to fuse a low spatial resolution multispectral image with a higher resolution panchromatic image to obtain an image with high spectral and spatial resolution. The Intensity-Hue-Saturation (IHS) method is a popular pan-sharpening method, used for its efficiency and the high spatial resolution of its output. However, the fused image suffers from spectral distortion. IHS stands for Intensity-Hue-Saturation (closely related to HSV, Hue-Saturation-Value). The method consists of three steps: first, the low-resolution RGB image is upsampled and converted to IHS space; second, the panchromatic band is histogram-matched to the intensity component and substituted for it; third, the IHS image is converted back to RGB space.
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B3, 2012
XXII ISPRS Congress, 25 August – 01 September 2012, Melbourne, Australia
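The three IHS steps can be approximated by the fast additive variant, sketched below under the simplifying assumptions that intensity is the mean of the RGB bands and that the pan band has already been histogram-matched to it:

```python
import numpy as np

def ihs_pansharpen(ms_up, pan):
    """Fast additive IHS pan-sharpening sketch.
    ms_up: multispectral image already upsampled to pan's size, shape (H, W, 3).
    pan:   panchromatic band, shape (H, W), assumed histogram-matched to intensity.
    """
    intensity = ms_up.mean(axis=2)        # I component of the IHS decomposition
    delta = pan - intensity               # substituting pan for I adds this offset
    return ms_up + delta[..., None]       # each band shifted; hue/saturation kept

rng = np.random.default_rng(0)
ms = rng.random((64, 64, 3))
pan = rng.random((64, 64))
fused = ihs_pansharpen(ms, pan)           # intensity of the result equals pan
```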
3.2. PC Spectral Sharpening 
PC Spectral Sharpening can be used to sharpen multispectral image data with high spatial resolution data. A principal component transformation is performed on the multispectral data. PC band 1 is replaced with the high-resolution band, which is first scaled to match PC band 1 so that no distortion of the spectral information occurs. An inverse transform is then performed, and the multispectral data are automatically resampled to the high-resolution pixel size using a nearest neighbor, bilinear, or cubic convolution technique.
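A sketch of this procedure in NumPy, using an eigendecomposition of the band covariance and assuming the multispectral cube has already been resampled to the pan grid:

```python
import numpy as np

def pc_sharpen(ms, pan):
    """PC Spectral Sharpening sketch.
    ms:  multispectral cube resampled to pan's grid, shape (H, W, B).
    pan: high-resolution band, shape (H, W)."""
    H, W, B = ms.shape
    X = ms.reshape(-1, B).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal component transform of the multispectral data.
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    V = eigvecs[:, np.argsort(eigvals)[::-1]]    # columns sorted by variance
    pcs = Xc @ V
    # Scale the pan band to match PC band 1, then substitute it.
    p = pan.ravel().astype(float)
    p = (p - p.mean()) / p.std() * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    # Inverse transform back to the spectral domain.
    return (pcs @ V.T + mean).reshape(H, W, B)

rng = np.random.default_rng(0)
ms = rng.random((32, 32, 4))
pan = rng.random((32, 32))
sharpened = pc_sharpen(ms, pan)
```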
3.3. Wavelet theory for spatial fusion 
In many cases we wish to examine both time and frequency information simultaneously; this leads to the wavelet transform. The wavelet transform is a type of signal representation that can give the frequency content of a signal at a particular instant of time. Moreover, it has advantages over traditional Fourier methods in analyzing physical situations where the signal contains discontinuities and sharp spikes, because wavelet algorithms process data at different scales or resolutions. The continuous wavelet transform (CWT) is defined as the sum over all time of the signal multiplied by scaled, shifted versions of the wavelet function ψ, as shown in Equation 1.
C(scale, position) = ∫_{-∞}^{+∞} f(t) ψ(scale, position, t) dt        (1)
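Equation 1 can be approximated numerically with a Riemann sum. The sketch below uses a Ricker ("Mexican hat") wavelet as ψ (an illustrative choice, not one prescribed by the paper) and shows that the coefficient magnitude is large near a discontinuity in the signal:

```python
import numpy as np

def ricker(t, scale, position):
    # Ricker (Mexican hat) wavelet, scaled and shifted, as an example psi.
    u = (t - position) / scale
    return (1 - u**2) * np.exp(-u**2 / 2) / np.sqrt(scale)

def cwt_coeff(f, t, scale, position):
    # Riemann-sum approximation of Eq. (1).
    dt = t[1] - t[0]
    return np.sum(f * ricker(t, scale, position)) * dt

t = np.linspace(-10, 10, 4001)
step = np.where(t >= 0, 1.0, 0.0)    # signal with a sharp discontinuity at t = 0
c_near = cwt_coeff(step, t, scale=1.0, position=0.5)   # near the discontinuity
c_far  = cwt_coeff(step, t, scale=1.0, position=7.0)   # far from it
```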
- DWT (discrete wavelet transform)
The DWT decomposes an image into low-frequency and high-frequency bands at different levels, and the image can also be reconstructed from these levels. When images are merged with this method, the different frequency bands are processed differently, which improves the quality of the fused image; it is therefore a good method for fusion at the pixel level.
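A minimal sketch of pixel-level wavelet fusion using a single-level 2-D Haar transform (the simplest DWT), averaging the approximation coefficients and keeping the larger-magnitude detail coefficients:

```python
import numpy as np

def haar2d(x):
    # Single-level 2D Haar DWT: approximation (LL) and detail subbands.
    a = (x[0::2] + x[1::2]) / 2.0          # row averages
    d = (x[0::2] - x[1::2]) / 2.0          # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    # Inverse single-level 2D Haar DWT (perfect reconstruction).
    rows, cols = LL.shape
    a = np.empty((rows, cols * 2)); d = np.empty((rows, cols * 2))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    x = np.empty((rows * 2, cols * 2))
    x[0::2] = a + d; x[1::2] = a - d
    return x

def wavelet_fuse(img1, img2):
    # Average approximations; keep the larger-magnitude detail coefficient.
    c1, c2 = haar2d(img1), haar2d(img2)
    fused = [(c1[0] + c2[0]) / 2.0]
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return ihaar2d(*fused)

rng = np.random.default_rng(0)
img1 = rng.random((8, 8))
img2 = rng.random((8, 8))
fused = wavelet_fuse(img1, img2)
```

Libraries such as PyWavelets provide multi-level decompositions of many wavelet families; this hand-rolled version only illustrates the principle.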
4. RESULT AND DISCUSSION
This work focuses on the fusion of multi-temporal images, i.e., images that may be captured at different times. The object of image fusion here is to retain the most desirable characteristics of each image in order to monitor a moving object. Different algorithms for data fusion were discussed in this paper, but the focus was on wavelet analysis for the fusion of temporal images for monitoring a moving object. The principle of image fusion using wavelets is to merge the wavelet decompositions of the multi-temporal images using fusion rules applied to the approximation coefficients and the detail coefficients.
In the first step, we tried to select a suitable wavelet for the fusion. In our experiment, seven wavelet families were examined: Haar Wavelet (HW), Daubechies (db), Symlets, Coiflets, Biorthogonal, Reverse Biorthogonal, and discrete Meyer (dmey). The best wavelet was selected on the basis of its correlation with the original image, and Daubechies (db1) was chosen because of its good results.
We then selected the level of decomposition. According to wavelet theory, the maximum level at which the wavelet transform can be applied depends on how many data points the data set contains, so we examined candidate levels based on the fusion result and chose decomposition at two levels, which gave high-quality fusion. The result is shown in Figure 4; the yellow circles on the picture mark the path of the moving object.
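The correlation-based selection described above can be sketched as follows (the candidate names and data are illustrative only):

```python
import numpy as np

def correlation(a, b):
    # Pearson correlation between a fused image and the original reference.
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    a -= a.mean(); b -= b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def select_best(reference, candidates):
    # candidates: dict mapping wavelet name -> fused image; keep the
    # candidate whose fusion result correlates best with the reference.
    return max(candidates, key=lambda k: correlation(reference, candidates[k]))

rng = np.random.default_rng(0)
ref = rng.random((16, 16))
candidates = {"db1":  ref + 0.01 * rng.random((16, 16)),   # close to reference
              "dmey": ref + 1.00 * rng.random((16, 16))}   # heavily perturbed
best = select_best(ref, candidates)
```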
Figure 4. Fusion of the multi-temporal images; the yellow circles mark the path of the moving object.