$$f(x)=\begin{cases}0, & x \le 0\\[4pt] \dfrac{\sigma_t}{\sigma_s}\,(x-g_s)+g_t, & 0 < x \le 255\end{cases}\qquad(2)$$
where $g_s$ = intensity mean of the source image
$g_t$ = intensity mean of the target image
$\sigma_s$ = standard deviation of the source image
$\sigma_t$ = standard deviation of the target image
Since the main transfer function for intensity values between 0 and 255 is linear, we call it the linear transfer function here. The above method is designed to satisfy the following rules.
$$f(g_s)=g_t\qquad(3)$$
$$f'(g_s)=\frac{\sigma_t}{\sigma_s}\qquad(4)$$
$$f(0)=0\qquad(5)$$
$$f(255)=255\qquad(6)$$
With these rules, the designed transfer function maps all possible intensity values into the range [0, 255], maps the intensity mean from $g_s$ to $g_t$, and maps the standard deviation from $\sigma_s$ to $\sigma_t$.
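As an illustration, the transfer of Equation (2) can be sketched in a few lines of Python/NumPy. This is only a sketch under our own naming: it applies the main linear segment and clips the result to [0, 255], while the exact treatment of the endpoints required by rules (5) and (6) may differ in the original implementation.

```python
import numpy as np

def linear_transfer(source, g_s, g_t, sigma_s, sigma_t):
    """Main linear segment of Equation (2): shift and scale the source
    intensities so that their mean and standard deviation match the
    target image, then keep the result inside [0, 255]."""
    x = source.astype(np.float64)
    y = (sigma_t / sigma_s) * (x - g_s) + g_t
    return np.clip(y, 0, 255).astype(np.uint8)
```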
  
Figure 1. Illumination change adjustment in one example: (a) original old-year orthoimage; (b) original current-year orthoimage; (c) current-year orthoimage adjusted according to the old-year orthoimage by the linear transfer function; (d) old-year orthoimage adjusted according to the current-year orthoimage by the linear transfer function.
Figure 1 shows the experimental results of illumination change adjustment through the linear transfer function. After the adjustment, the color tones of the orthoimages become more similar than in the original case, i.e. (a) and (c), and likewise (b) and (d), are closer in illumination than (a) and (b).
A piecewise cubic spline transfer function is also proposed in [5] to overcome the fast saturation near very low and very high intensity values. According to our experimental results on several datasets, illumination change adjustment with this function results in fewer wrong detections, but also fewer
correct detections. To ensure that no correct detections are missed, we adopt the linear transfer function in this framework.
Furthermore, to decide which orthoimage should serve as the source image, we carried out further experiments comparing the change detection results of the two possible choices. The experimental results show that it is better to choose the darker orthoimage as the source image and the brighter one as the target image, since this yields fewer wrong detections without affecting the correct detections.
To summarize the above analysis, we apply the linear color transfer to the darker orthoimage so that its color tone becomes similar to that of the brighter orthoimage. In the case of Figure 1, this means that after illumination change adjustment, (a) and (c) are taken as the input data for the next step.
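Reusing the linear_transfer sketch above, this choice of the darker orthoimage as the source can be expressed as follows. The function name is ours, not from the original implementation, and for colour orthoimages the transfer would typically be applied per channel.

```python
def adjust_darker_to_brighter(img_a, img_b):
    """Transfer the colour tone of the darker orthoimage (source)
    towards the brighter one (target); return the adjusted source
    together with the untouched target."""
    src, tgt = (img_a, img_b) if img_a.mean() <= img_b.mean() else (img_b, img_a)
    adjusted = linear_transfer(src, src.mean(), tgt.mean(), src.std(), tgt.std())
    return adjusted, tgt
```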
3.3 Precise Location Difference Rectification 
To resolve the location errors that remain after the global location difference rectification by the global matching method, we further propose the following local matching method, in which the rectification amount of the location difference is computed for each local region. It mainly consists of three steps.
Firstly, key points are extracted over the whole image by Harris corner extraction [6]. Note that the threshold for selecting the key points is set relatively high to ensure that only reliable corners are selected. We perform Harris corner extraction in the two orthoimages separately and select the one with more reliable corners as the benchmark image.
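A minimal sketch of this first step, assuming OpenCV is used for the Harris response; the relative threshold of 5% of the maximum response and the detector parameters are illustrative, since the paper only states that the threshold is set relatively high.

```python
import cv2
import numpy as np

def reliable_corners(gray, rel_thresh=0.05):
    """Harris corner extraction with a relatively high threshold so that
    only strong, reliable corners are kept (threshold is illustrative)."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(response > rel_thresh * response.max())
    return list(zip(xs.tolist(), ys.tolist()))

def choose_benchmark(ortho_a, ortho_b):
    """Return (benchmark, other): the benchmark is the grayscale
    orthoimage with the larger number of reliable corners."""
    corners_a, corners_b = reliable_corners(ortho_a), reliable_corners(ortho_b)
    return (ortho_a, ortho_b) if len(corners_a) >= len(corners_b) else (ortho_b, ortho_a)
```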
Secondly, for each key point in the benchmark image, we search for its matching point in the other orthoimage by template matching. Since only a small local difference remains after the global rectification, the search for the matching point is carried out only in the neighbourhood of each key point. The computation is similar to Function (1) of the global matching, with the difference that the matching cost is computed over a template surrounding the key point, and the template is shifted within the neighbourhood of the key point. We further filter out matched point pairs with a high matching cost. In this way, wrong matching pairs are removed, for example pairs involving corners caused by noise or by moving objects in the benchmark orthoimage. The experimental results show that after filtering, mainly the matching pairs of unchanged object corners remain, such as the corners of unchanged buildings.
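The following sketch illustrates this second step for a single key point, assuming the benchmark image and the other orthoimage are grayscale NumPy arrays. The matching cost of Function (1) is not reproduced here, so a mean absolute difference stands in for it, and the template size, search radius and cost threshold are illustrative values.

```python
import numpy as np

def match_key_point(benchmark, other, x, y, half=10, radius=5, max_cost=20.0):
    """Search the (2*radius+1)^2 neighbourhood of key point (x, y) for the
    shift whose template has the lowest matching cost; return None if even
    the best candidate has a high cost (the pair is then filtered out)."""
    tmpl = benchmark[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    best_cost, best_shift = np.inf, None
    for m in range(-radius, radius + 1):
        for n in range(-radius, radius + 1):
            patch = other[y + n - half:y + n + half + 1,
                          x + m - half:x + m + half + 1].astype(np.float64)
            if patch.shape != tmpl.shape:
                continue  # shift falls outside the image
            cost = np.abs(patch - tmpl).mean()  # stand-in for Function (1)
            if cost < best_cost:
                best_cost, best_shift = cost, (m, n)
    return best_shift if best_cost < max_cost else None
```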
Thirdly, in each local block, the rectification amount is obtained by a voting scheme. All the remaining matched pairs vote with the shift between their two corresponding pixels. Among all possible shift positions in the shift range, the one with the highest number of votes is taken as the final rectification amount of the block, as described in Function (7).
$$\arg\max_{m\in[-r,r],\;n\in[-r,r]} C_{m,n}\qquad(7)$$
where $C_{m,n}$ = number of votes for the shift position $(m, n)$
Here the whole image is divided into non-overlapping rectangular blocks of the same size. The block size must be set carefully: if it is too small, the shift amount for rectification is easily affected by image details, while if it is too large, the result differs little from the global matching method. Based on experience, for the experimental image of 3000 x 2500 pixels, we set the block size to 500 x 500.
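Finally, a sketch of the voting scheme of Function (7), assuming that matches is a list of (key-point position, shift) tuples that survived the cost filtering; the block size of 500 follows the value quoted above. In practice it would be fed with the results of match_key_point applied to every reliable benchmark corner.

```python
from collections import Counter

def block_shifts(matches, block_size=500):
    """Accumulate votes C_{m,n} per block and return, for every block,
    the shift position (m, n) with the highest number of votes."""
    votes = {}
    for (x, y), (m, n) in matches:
        block = (x // block_size, y // block_size)
        votes.setdefault(block, Counter())[(m, n)] += 1
    return {block: counter.most_common(1)[0][0] for block, counter in votes.items()}
```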
	        