[First page truncated in extraction. Recoverable content: Technical Commission III (B3), Volume XXXIX-B3, 2012. The paper presents a scale invariant feature (SIFT) based method for automatic tie point matching and dense point cloud matching in aerial triangulation, tested with Quality Filtering and measurement on aerial images of Taiwan, yielding a high density of 3D object points.]
For keypoint matching, the Euclidean distance between the descriptor of a keypoint on the left image and every descriptor on the right image is calculated. A pair of keypoints is accepted as a match if the distance ratio (the shortest Euclidean distance divided by the second shortest one) is smaller than a given threshold; thus, the keypoint on the left image is matched to the corresponding one on the right image. Otherwise, the matching of that keypoint on the left image fails.
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B3, 2012 
XXII ISPRS Congress, 25 August — 01 September 2012, Melbourne, Australia 
The matching and searching operations will be done repeatedly 
until all keypoints are processed. 
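The distance-ratio matching described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the ratio threshold of 0.8 are assumptions (the paper's threshold value is not given in the extracted text):

```python
import numpy as np

def ratio_test_match(des_left, des_right, ratio=0.8):
    """Match SIFT descriptors by the nearest-neighbour distance-ratio test.

    des_left, des_right: (N, 128) and (M, 128) float arrays of descriptors.
    Returns a list of (i, j) index pairs of accepted matches.
    """
    matches = []
    for i, d in enumerate(des_left):
        # Euclidean distance from left keypoint i to every right descriptor
        dists = np.linalg.norm(des_right - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        # Accept only if the closest match is clearly better than the runner-up
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches
```

The loop runs once per left-image keypoint, so matching terminates after all keypoints are processed, as in the text.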
The proposed dense matching strategy is illustrated in Figure 1. Without the need for a priori knowledge of image overlap information, the first step is SIFT keypoint extraction to obtain the location (abbreviated as Loc.) and descriptor (abbreviated as Des.) of each keypoint on the P input images (P >= 2). The loop number equals P, namely the number of input images. Step 2 is keypoint matching for the C(P,2) = P(P-1)/2 pairs of images, one image pair at a time. The result table of each image matching pair then stores the locations of the matched points and the numbers of the left and right image for every image matching pair. Step 3 is matched point connection: by comparing the locations of matched points, all matched points are rearranged and, eventually, coded into a numbered result.
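Steps 1 and 2 of the strategy can be sketched as follows. `extract_keypoints` and `match_pair` are hypothetical stand-ins for the SIFT extraction and keypoint matching described in the text; this is an outline of the control flow, not the authors' code:

```python
from itertools import combinations

def dense_matching_pipeline(images, extract_keypoints, match_pair):
    """Outline of the pairwise matching strategy.

    extract_keypoints(image) -> (locations, descriptors) for one image.
    match_pair(des_i, des_j) -> list of (a, b) index pairs of matches.
    """
    # Step 1: keypoint extraction, one loop pass per input image (P images)
    keypoints = [extract_keypoints(img) for img in images]

    # Step 2: match every pair of images exactly once -> C(P, 2) pairs
    pair_tables = {}
    for i, j in combinations(range(len(images)), 2):
        (loc_i, des_i), (loc_j, des_j) = keypoints[i], keypoints[j]
        idx_pairs = match_pair(des_i, des_j)
        # Result table: matched locations for this (left, right) image pair
        pair_tables[(i, j)] = [(loc_i[a], loc_j[b]) for a, b in idx_pairs]
    return pair_tables
```

With P = 3 input images the loop visits exactly C(3,2) = 3 image pairs.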
Figure 1. Dense point cloud matching strategy
Figure 2 illustrates the format of the temporary tables used for matched point connection. Every result table of a single image matching pair contains the locations (row, column) of the matched points on the left and right image, denoted by (r1, c1) and (r2, c2). The table of the connection result stores the location (r, c), point number (PN) and index value for every tie point in each image, which is obtained by location matching using the result tables of the image matching pairs. The index value is used for descriptor inquiry, i.e. it indicates that the descriptor belongs to the i-th keypoint on the j-th image. Numbered tie points are connected if |Δr| < 10 pixels and |Δc| < 10 pixels, and the repeated measurements are eliminated at this step.
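The connection step (Step 3) can be sketched as below. The data layout and function names are assumptions for illustration; only the 10-pixel location tolerance comes from the text:

```python
def connect_matched_points(pair_tables, tol=10):
    """Connect matched points across image pairs into numbered tie points.

    pair_tables: {(left_img, right_img): [((r, c), (r, c)), ...]}.
    Two observations on the same image are treated as the same keypoint
    when |dr| < tol and |dc| < tol, which also removes repeated
    measurements.  Returns a list of {image_no: (r, c)} per tie point.
    """
    tie_points = []

    def find_tie(img, loc):
        # Look for an existing tie point observed on this image nearby
        for pn, obs in enumerate(tie_points):
            if img in obs:
                dr = abs(obs[img][0] - loc[0])
                dc = abs(obs[img][1] - loc[1])
                if dr < tol and dc < tol:
                    return pn
        return None

    for (i, j), matches in pair_tables.items():
        for loc_i, loc_j in matches:
            pn = find_tie(i, loc_i)
            if pn is None:
                pn = find_tie(j, loc_j)
            if pn is None:
                tie_points.append({})       # new point number
                pn = len(tie_points) - 1
            # keep only the first measurement per image (eliminate repeats)
            tie_points[pn].setdefault(i, loc_i)
            tie_points[pn].setdefault(j, loc_j)
    return tie_points
```

A point matched in pairs (0,1) and (1,2) at nearly the same location on image 1 thus receives a single point number spanning all three images.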
Figure 2. Connection of matched points 
Figure 3. Method of key point extraction for a large image 
Furthermore, in order to increase the operational efficiency, especially for large format images of m x n pixels, e.g. m = 12096 and n = 11200 for our test aerial images, the input image is first divided into small sub-images, as shown in Figure 3. Taking the capacity of the core processing programs executed on the adopted PC into account, sub-images of 1800 rows x 1800 columns are used in this study. Key points are then extracted in each sub-image. All key points extracted in all sub-images are then merged together to output the key point extraction result for the original large format input image.
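The tiling can be sketched as follows (a minimal illustration; the function name is an assumption, and edge tiles are simply allowed to be smaller than 1800 x 1800):

```python
def tile_image(rows, cols, tile=1800):
    """Split an m x n image into sub-image windows of at most
    tile x tile pixels for per-tile keypoint extraction.

    Returns a list of (r0, r1, c0, c1) half-open pixel windows.
    """
    windows = []
    for r0 in range(0, rows, tile):
        for c0 in range(0, cols, tile):
            windows.append((r0, min(r0 + tile, rows),
                            c0, min(c0 + tile, cols)))
    return windows
```

For the 12096 x 11200 test images this yields 7 x 7 = 49 sub-images; a keypoint found at (r, c) inside a tile is shifted by the tile origin (r0, c0) before the per-tile results are merged.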
2.2 Quality Filtering (QF) 
The extremely large number of key points limits the efficiency of matching many aerial images of large image format. In order to reduce the runtime, quality filtering (QF) attempts to retain those key points with the best image quality. The standard deviation G_s of gray levels, computed by Eq. (1), is evaluated for every keypoint in a local image window of 15 x 15 pixels centered at the keypoint. Generally, G_s stands for the contrast of the keypoint image. In the case of little noise, it also indicates the amount of texture information (the so-called quality) at the keypoint.
    G_s = sqrt( Σ_{r,c} (G_{r,c} - Ḡ)² / (n - 1) )            (1)

where G_{r,c} = the gray value of the (r, c)-th pixel
      Ḡ = the average of gray values in the 15 x 15 window
      n = the number of pixels in the window
Assuming that the indicator values G_s of all keypoints in one image are normally distributed, the threshold for selecting the best key points is set to the mean plus one standard deviation of all indicator values in the image. Thus, only about 16% of the key points are retained for later matching.
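The QF step can be sketched as below, assuming grayscale images as NumPy arrays; the function name is hypothetical, and border windows are simply clipped at the image edge:

```python
import numpy as np

def quality_filter(image, keypoints, win=15):
    """Keep keypoints whose local gray-level standard deviation G_s
    (Eq. 1, over a win x win window) exceeds the mean plus one standard
    deviation of all G_s values in the image.

    Under a normality assumption roughly 16% of keypoints survive.
    """
    h = win // 2
    gs = []
    for r, c in keypoints:
        window = image[max(r - h, 0):r + h + 1, max(c - h, 0):c + h + 1]
        gs.append(window.std(ddof=1))   # sample standard deviation, n - 1
    gs = np.asarray(gs)
    threshold = gs.mean() + gs.std()
    return [kp for kp, g in zip(keypoints, gs) if g > threshold]
```

Keypoints on flat (texture-free) patches thus fall below the threshold and are discarded before matching.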
In essence, QF uses a heuristic filtering step based on the standard deviation of gray levels to discard weak keypoints, applied uniformly to the individual sub-images. Since the indicator value of QF is exchangeable and the threshold is adjustable, the suitability of this setting will be verified by the tests.
Figure 4. Two functions of AFTP: overlap estimation (left) and
searching window prediction (right)
2.3 Affine Transformation Prediction (AFTP) 
This method uses AFTP to estimate the overlap area and to predict the location of the searching window, as shown in Figure 4. Instead of using the original high resolution images, AFTP performs a fast pre-matching on higher levels of the image pyramid, which contain fewer key points, to maintain efficiency and, at the same time, determine whether the follow-up process is necessary. The image size at the top level is assumed to be about 700 x 700 pixels. If the six affine transformation parameters of an image matching pair can be calculated with proper accuracy by means of least-squares adjustment (LSA), then the two images overlap. The locations of their corresponding image points are approximately described by the affine transformation parameters, which can also be utilized to predict the searching window. Otherwise, the image matching pair has little or no overlap and will be skipped in the follow-up matching process. As long as only well overlapped image matching pairs are processed, the tie points
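The least-squares solution of the six affine parameters can be sketched as follows. The function name, point format and the `max_rmse` acceptance threshold are assumptions for illustration; the paper does not state its accuracy criterion:

```python
import numpy as np

def fit_affine(pts_left, pts_right, max_rmse=2.0):
    """Solve the six affine parameters of an image pair by LSA.

    Model: x' = a0 + a1*x + a2*y,  y' = b0 + b1*x + b2*y.
    Returns (params, rmse) when the fit is acceptable, else None
    (the pair is then treated as having little or no overlap).
    """
    pts_left = np.asarray(pts_left, float)
    pts_right = np.asarray(pts_right, float)
    n = len(pts_left)
    if n < 3:                       # 6 unknowns need at least 3 point pairs
        return None
    # Design matrix: two observation equations per matched point
    A = np.zeros((2 * n, 6))
    A[0::2, 0] = 1; A[0::2, 1] = pts_left[:, 0]; A[0::2, 2] = pts_left[:, 1]
    A[1::2, 3] = 1; A[1::2, 4] = pts_left[:, 0]; A[1::2, 5] = pts_left[:, 1]
    l = pts_right.reshape(-1)
    params, *_ = np.linalg.lstsq(A, l, rcond=None)
    rmse = np.sqrt(np.mean((A @ params - l) ** 2))
    return (params, rmse) if rmse <= max_rmse else None
```

When the fit succeeds, applying the parameters to a left-image keypoint predicts the center of its searching window on the right image; when it fails, the pair is skipped.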