3. METHODOLOGY 
The proposed framework for improving the accuracy of change 
detection addresses two problems: illumination change and 
location differences between orthoimages. After analyzing the 
mutual influence of these problems, we design the following 
framework. 
First, the global location difference between the two orthoimages 
is rectified; this step must come first, especially when the 
overall location difference is large. Second, illumination change 
adjustment is carried out so that the two orthoimages have a more 
uniform illumination condition. Third, a more precise rectification 
of the location difference in local regions is performed, so that 
the same object appears at the same location in both orthoimages. 
This framework is implemented as pre-processing before change 
detection, since the three steps above improve the quality of the 
original input data. With more accurate input data, more accurate 
detection results and shorter change-detection processing time can 
be expected. 
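As a rough illustration of how the three steps could be chained, the sketch below uses off-the-shelf scikit-image/SciPy routines as stand-ins for the methods described in the following subsections; the function name and the choice of library calls are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage
from skimage.color import rgb2gray
from skimage.exposure import match_histograms
from skimage.registration import phase_cross_correlation

def preprocess(ortho_t1, ortho_t2):
    """Chain the three accuracy-improvement steps on a pair of RGB
    orthoimages (float arrays of identical shape) before change detection."""
    # Step 1: global location difference rectification.
    # Library phase correlation is used here as a stand-in for the
    # matching-cost search of Section 3.1; it likewise yields one
    # global (dy, dx) shift for the whole image.
    dy, dx = phase_cross_correlation(rgb2gray(ortho_t1),
                                     rgb2gray(ortho_t2))[0]
    ortho_t2 = ndimage.shift(ortho_t2, shift=(dy, dx, 0), order=1)

    # Step 2: illumination change adjustment by per-channel
    # histogram matching (Section 3.2).
    ortho_t2 = match_histograms(ortho_t2, ortho_t1, channel_axis=-1)

    # Step 3: local location difference rectification (region-wise
    # matching, described later in the paper) would follow here; omitted.
    return ortho_t1, ortho_t2
```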
3.1 Global Location Difference Rectification 
As stated above, change detection is carried out on both 
orthoimages and DSMs from different times. For both orthoimages 
and DSMs, the location difference between the two datasets may 
degrade the accuracy of the final results. This location 
difference arises from various factors. For example, the aerial 
triangulation data used to generate the DSM from stereo images 
may have different accuracy levels in the two datasets, or 
different systematic errors may be introduced by the camera, the 
aerial triangulation calculation, and the stereo matching. 
Rather than analyzing the above causes of the location difference, 
we directly analyze the two datasets to find the location error 
between them. Over the wide area covered by the images, most 
parts do not change. For example, according to [4], the buildings 
that change annually in one photograph amount to only 3% to 5% 
of the overall number of buildings in most cases. Based on this, 
it is possible to carry out matching on the unchanged parts of 
the two datasets from different times. Compared with DSM data, 
which contain only height information, orthoimages carry more 
information in their three color channels, so more characteristics 
can be extracted from them. Therefore, it is easier to perform 
matching between two orthoimages than between DSM data. Through 
image matching, we attempt to find the location difference between 
the parts of the respective orthoimages that describe the same 
area in the real world. 
Basically, orthoimages are the ortho-rectified results of the 
original aerial images, based on the 3D information of the DSM. 
Strictly speaking, after ortho-rectification each pixel in the 
orthoimage corresponds to exactly one point in the real world 
with a unique latitude and longitude. In this sense, corresponding 
points in the two orthoimages of different times should be at the 
same location. However, due to the errors stated at the beginning 
of this section, small differences between corresponding points 
remain. In addition, the orthoimages input for change detection 
are already resampled to the same resolution to facilitate change 
detection. Based on this analysis, we conclude that there is 
neither rotation nor scale difference between the two orthoimages, 
so the location difference between them can be described simply 
as a shift in the X and Y directions. 
We utilize a global matching method to obtain the overall 
location difference for the whole orthoimage. The computing 
scheme is as follows. Of two orthoimages I1 and I2 with the same 
size, either can be selected as the comparison target image, for 
example I2. Initially, I1 and I2 are placed so that they 
completely overlap, and then I1 is shifted in both the X and Y 
directions within a defined range [-r, r]. For each shift position 
the matching cost is computed, and the position with the minimum 
matching cost is taken as the global location difference between 
the two orthoimages. 
\arg\min_{m \in [-r,r],\, n \in [-r,r]} \frac{1}{N_{m,n}} \sum_{i,j} \left| g_1(i+m,\, j+n) - g_2(i,\, j) \right| \qquad (1)

where  g_2(i, j) = intensity of pixel (i, j) in I2
       g_1(i+m, j+n) = intensity of the pixel in the shifted I1 that corresponds to (i, j) of I2
       N_{m,n} = number of pairs of corresponding pixels at shift position (m, n) of I1
       m, n = shift position of I1
       [-r, r] = shift range of m and n
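A minimal sketch of this exhaustive shift search, assuming the two orthoimages are supplied as single-channel intensity arrays of equal size (variable and function names are illustrative, not taken from the paper):

```python
import numpy as np

def global_shift(g1, g2, r):
    """Search all shifts (m, n) in [-r, r] and return the one minimising
    the mean absolute intensity difference between shifted g1 and the
    target image g2, i.e. the matching cost of Eq. (1)."""
    h, w = g2.shape
    best_cost, best_shift = np.inf, (0, 0)
    for m in range(-r, r + 1):
        for n in range(-r, r + 1):
            # Region where both g2(i, j) and g1(i + m, j + n) exist
            i0, i1 = max(0, -m), min(h, h - m)
            j0, j1 = max(0, -n), min(w, w - n)
            if i1 <= i0 or j1 <= j0:
                continue
            diff = np.abs(g1[i0 + m:i1 + m, j0 + n:j1 + n].astype(np.float64)
                          - g2[i0:i1, j0:j1])
            cost = diff.mean()  # sum of |differences| divided by N_{m,n}
            if cost < best_cost:
                best_cost, best_shift = cost, (m, n)
    return best_shift  # global location difference (m, n)
```

Applying the returned shift to I1 aligns it with I2 over the whole image before the illumination adjustment of Section 3.2.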
Experimental results of the global matching method show that, 
compared with the original datasets, the location difference over 
the whole image is reduced. In particular, for orthoimages showing 
relatively flat land there is almost no location difference left 
after the global rectification. In contrast, for orthoimages 
containing terrain with varying heights, we find that regions at 
different altitudes still show remaining location errors, and 
these errors differ from region to region. This happens because 
the amount of ortho-rectification applied to the original images 
differs for different altitude levels, which results in different 
location differences. To solve this problem, a local matching 
method is proposed later. 
3.2 Illumination Change Adjustment 
For change detection, the original images are taken under 
different conditions such as season, weather, and shooting time 
of day, which result in different illumination in the images. 
Even the same rooftop may appear in quite different colors in the 
orthoimages of the two times, and this leads to many false 
detections of color change. To solve this problem, we unify the 
illumination of the two orthoimages used for change detection. 
Many methods exist to analyze the illumination model of the sun 
for aerial images according to the shooting season, the shooting 
time of day, and sometimes the light-reflecting characteristics 
of the rooftop material. To keep the overall processing efficient, 
rather than performing such complex model analysis, we apply a 
color transfer that only adjusts the color tone of one orthoimage 
to make it look like the other orthoimage. 
Here we utilize the method of histogram matching [5], which 
adjusts each color channel based on the global image statistics. 
In detail, for each channel, the following function is designed 
to transfer each intensity value from the source image to the 
target image. 
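A minimal sketch of such a per-channel intensity transfer, assuming the standard cumulative-histogram (CDF) mapping; the function names are illustrative and the exact formulation may differ from the paper's.

```python
import numpy as np

def match_channel(source, target):
    """Map source-channel intensities so that their cumulative histogram
    follows that of the target channel (standard histogram matching)."""
    s_values, s_counts = np.unique(source.ravel(), return_counts=True)
    t_values, t_counts = np.unique(target.ravel(), return_counts=True)
    # Normalised cumulative distribution functions of both channels
    s_cdf = np.cumsum(s_counts) / source.size
    t_cdf = np.cumsum(t_counts) / target.size
    # For each source intensity, take the target intensity whose cumulative
    # frequency is closest (linear interpolation between target levels)
    mapped = np.interp(s_cdf, t_cdf, t_values)
    return mapped[np.searchsorted(s_values, source.ravel())].reshape(source.shape)

def adjust_illumination(source_img, target_img):
    """Apply the per-channel mapping to each colour channel independently."""
    return np.stack([match_channel(source_img[..., c], target_img[..., c])
                     for c in range(source_img.shape[-1])], axis=-1)
```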