Technical Commission III (B3)

   
D0i = D0max - (Fi - Fmin) / (Fmax - Fmin) × (D0max - D0min)    (4)
where Fi is the definition of the i-th image block, Fmin and Fmax are the minimum and maximum definitions among all blocks, and D0min and D0max are the minimum and maximum cut-off frequencies.
At last, the Gaussian filter with different cut-off frequencies
is applied to each image block to obtain its background image.
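The per-block filtering can be sketched as follows. This is a minimal illustration in Python with NumPy, not the paper's implementation: it assumes a frequency-domain Gaussian low-pass filter H(u,v) = exp(-D²/(2·D0²)) and a linear mapping from block definition to cut-off frequency; the function names and the mapping's direction (higher definition to lower cut-off) are our assumptions.

```python
import numpy as np

def gaussian_lowpass(block, d0):
    """Frequency-domain Gaussian low-pass: H(u,v) = exp(-D^2 / (2*D0^2))."""
    rows, cols = block.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    D2 = u[:, None] ** 2 + v[None, :] ** 2      # squared distance from the centre
    H = np.exp(-D2 / (2.0 * d0 ** 2))
    F = np.fft.fftshift(np.fft.fft2(block))     # DC component moved to the centre
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

def cutoff_for_block(F_i, F_min, F_max, d0_min, d0_max):
    """Linearly map a block's definition F_i to a cut-off frequency D0i."""
    t = (F_i - F_min) / (F_max - F_min)
    return d0_max - t * (d0_max - d0_min)
```

A constant block passes through the filter unchanged, since only its DC component is non-zero and H is 1 there.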
3.2 Subtraction 
To subtract the background image from the original image, 
use the following formula: 
fout(i) = fin(i) - fb(i) + offset(i)    (5)
In the formula above, fout(i) is the i-th resulting image block,
fin(i) is the i-th original image block, fb(i) is the i-th
background image block, and offset(i) is the gray-level offset of
the i-th image block. In order to keep the original image's
average lightness, offset(i) should be calculated as follows:
offset(i) = -aveori(i) + aveb(i) + aveori    (6)
where aveori(i) is the average gray value of the i-th original
image block, aveb(i) is the average gray value of the i-th
background image block, and aveori is the average gray value
of the whole original image.
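A minimal sketch of the subtraction step for one block, assuming NumPy arrays of equal size for the block and its background (the function name is illustrative):

```python
import numpy as np

def subtract_background(f_in, f_b, ave_whole):
    """Subtract the background from one block and restore the whole
    image's average lightness via the per-block offset."""
    # offset(i) = -ave_ori(i) + ave_b(i) + ave_ori (whole image)
    offset = float(f_b.mean()) - float(f_in.mean()) + ave_whole
    return f_in - f_b + offset
```

By construction, the mean of the result equals the whole image's average gray value, which is what keeps the lightness even across blocks.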
3.3 Removing the Border Lines between Image Blocks 
After the subtraction of the image blocks, the hues of the
image blocks are approximately the same, but there are
obvious grayscale differences between adjacent blocks, and
obvious border lines exist. Thus, the image needs to be
further processed to remove them.
(1) Process each image block using linear stretch based on 
the overlapped area. 
(a) Adjust the gray value of each image block horizontally. 
Taking the first image block in each row as the reference,
construct an appropriate linear transformation model for each of
the other blocks in the row, acquired by least-squares fitting on
the gray values of the pixels overlapped between adjacent blocks,
and use it to adjust their gray values.
(b) Adjust the gray value of each image block vertically
using the same method. 
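The least-squares fit for one block pair can be sketched as follows (Python/NumPy; the helper name and the gain/offset parameterization a·x + b are our assumptions):

```python
import numpy as np

def fit_linear_transform(ref_overlap, blk_overlap):
    """Least-squares gain and offset (a, b) so that a*blk + b best matches
    the reference block on their overlapped pixels."""
    A = np.stack([blk_overlap.ravel(), np.ones(blk_overlap.size)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, ref_overlap.ravel(), rcond=None)
    return a, b
```

The fitted (a, b) is then applied to every pixel of the adjusted block, not only to the overlap.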
(2) Process the gray values of the pixels in the overlapped 
areas by weight. 
Through the process above, the grayscale differences between
the blocks are reduced significantly and some blocks merge well,
but a discontinuity of grayscale still exists between some
adjacent blocks and the border lines can be easily seen.
Therefore, we construct weighted coefficients based on the
distance from the pixels in the overlapped area to the border
line, and then use the data in the overlapped area to perform a
gradient mosaic so as to eliminate visual fragmentation in the
mosaic image.
Take the one-dimensional overlap as an example to briefly
illustrate the weighted coefficients (Figure 2). It is assumed
that X and Y are adjacent image blocks, and Z is the
mosaic image.
   
   
Figure 2. The sketch map of one-dimensional overlap 
Suppose pixel i in the overlapped area lies in column d of Y,
and L is the column width of the overlapped area. Then the
grayscale value after applying the gradient mosaic according
to the weighted coefficients is
fz(i) = (1 - d/L) × fx(i) + (d/L) × fy(i)    (7)
where fz (i) is the grayscale value of the pixel i in the mosaic 
image, fy(i) is the grayscale value of the pixel i in the image 
block X, and fy (i) is the grayscale value of the pixel i in the 
image block Y. 
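A sketch of the gradient mosaic over an L-column overlap, assuming d is measured from the border on the X side of the overlap (the function name and array layout are illustrative):

```python
import numpy as np

def blend_overlap(fx, fy):
    """Blend the overlapped strips of two adjacent blocks: column d of the
    L-column overlap gets (1 - d/L)*fx + (d/L)*fy, so the weight shifts
    smoothly from X to Y across the overlap."""
    L = fx.shape[1]
    w = np.arange(L) / float(L)          # d/L for each overlap column
    return (1.0 - w) * fx + w * fy
```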
3.4 Stretching 
After subtraction, the contrast of the image becomes low, so
in order to increase both the local fine contrast and the
overall contrast, it is necessary to stretch the image
(Gasparini et al., 2004).
In addition, in remote sensing images the contrast of regions
with higher lightness is usually higher, and the contrast of
regions with lower lightness is lower. Even after dodging,
this phenomenon still exists.
Therefore, when stretching the image after subtraction, we 
should take the uneven distribution of contrast into 
consideration. For the region which is brighter in the original 
image, the degree of stretching should be less, while the 
darker region in the original image should be stretched to a 
greater degree. In this way, we can obtain a satisfactory 
resulting image with even lightness and contrast. The 
concrete steps of stretching are as follows: 
(1) Producing the background image
Smooth the original image using the Gaussian low-pass filter 
to get the background image. Here we only need to get the 
approximate lightness trend of the original image, so it can 
be directly processed as a whole and it is not necessary to 
divide it into blocks. 
(2) Stretching the resulting image after removing the border lines
Design an appropriate linear transformation model, and
stretch the regions with different lightness in the original
image to different degrees by adjusting the parameters. The
linear transformation model is as follows:
f'(i,j) = k × (f(i,j) - aveori) + aveori
k = 1 - sin( (π/2) × (fb(i,j) - fb,max) / (fb,max - fb,min) )
where f (i,j) is the gray value of the pixel in the i-th row and 
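As a sketch of this adaptive stretch, the snippet below (Python with NumPy) stretches each pixel about the global mean with a gain that decreases as the background image gets brighter, falling from 2 in the darkest background regions to 1 in the brightest. The function name, the sine-based gain shape, and the normalization by the background image's minimum and maximum are our assumptions, not the paper's exact parameters.

```python
import numpy as np

def adaptive_stretch(f, f_b):
    """Stretch f about its global mean with a background-dependent gain:
    dark-background pixels are stretched more, bright ones less."""
    ave = f.mean()
    # t in [-1, 0]: -1 at the darkest background pixel, 0 at the brightest
    t = (f_b - f_b.max()) / (f_b.max() - f_b.min() + 1e-12)
    k = 1.0 - np.sin(0.5 * np.pi * t)    # gain in [1, 2]: dark -> 2, bright -> 1
    return k * (f - ave) + ave
```

With this shape, a pixel whose background is brightest keeps its deviation from the mean unchanged (k = 1), while one on the darkest background has its deviation doubled (k = 2).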