Figure 5. The three test data sets (test data 2: area with high buildings, 3000*3000 pixels; test data 3: normal area, 4096*4096 pixels).
The time cost of each processing step is shown in Table 1.
Table 1. Processing time (ms)

Processing step                                   Data 1   Data 2   Data 3
1. Wallis filter                                     672     1321     2321
2. Feature extraction (Harris)                       122      231      321
3. Calculate left image vectors (10*10 points)        32       35       37
4. Calculate right image vectors (all pixels)        753     1424     2531
5. Find potential correspondences                     12       12       11
6. Remove error points by GHT                         11       11       12
Total                                               1602     3034     5233
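As a rough illustration of how such a stage-by-stage timing breakdown can be collected, the Python sketch below times only the Harris feature-extraction stage using OpenCV; the function names, thresholds and file names are placeholders rather than the authors' implementation, and the remaining stages are indicated only in comments.

import time
import cv2
import numpy as np

def harris_corners(img, block_size=2, ksize=3, k=0.04, thresh=0.01):
    # Harris feature extraction (step 2 in Table 1) via OpenCV;
    # parameter values here are illustrative defaults, not the paper's.
    response = cv2.cornerHarris(np.float32(img), block_size, ksize, k)
    return np.argwhere(response > thresh * response.max())

def timed(label, fn, *args):
    # Run one stage and report its wall-clock time in milliseconds.
    t0 = time.perf_counter()
    result = fn(*args)
    print("%s: %.0f ms" % (label, (time.perf_counter() - t0) * 1e3))
    return result

left = cv2.imread("left.tif", cv2.IMREAD_GRAYSCALE)   # placeholder file name
corners = timed("2. feature extraction (Harris)", harris_corners, left)
# Steps 1 and 3-6 (Wallis filter, image-vector construction, candidate
# search, GHT outlier removal) would be timed the same way with the
# authors' own implementations, which are not reproduced here.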
Table 2. Processing time by SIFT (ms)

          Data 1    Data 2    Data 3
Total      42304     93872    244387

(Code downloaded from http://www.cs.huji.ac.il/~ofirpele/SiftDist/code and
compiled with Microsoft Visual C++ 6.0.)
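For reference, a comparable SIFT baseline can be timed in a few lines. The sketch below uses OpenCV's SIFT detector and Lowe's ratio test as a stand-in for the SiftDist code cited above, so its absolute times will not match those in Table 2.

import time
import cv2

left = cv2.imread("left.tif", cv2.IMREAD_GRAYSCALE)    # placeholder inputs
right = cv2.imread("right.tif", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

t0 = time.perf_counter()
kp1, des1 = sift.detectAndCompute(left, None)
kp2, des2 = sift.detectAndCompute(right, None)
matches = matcher.knnMatch(des1, des2, k=2)
# Lowe's ratio test keeps only distinctive correspondences.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print("SIFT matching: %.0f ms, %d matches"
      % ((time.perf_counter() - t0) * 1e3, len(good)))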
The matching results are shown in Figure 6.
Figure 6. Matching results of the proposed method.
As can be seen from the results, the image matching algorithm based on the rotation vector field can indeed solve the problem of image rotation, and its computing speed is much faster than that of traditional template matching algorithms, in which each pixel is involved in complex operations (such as convolution).
The method also yields good matching points, which can later be used as initial values for accurate matching.
4. CONCLUSION
From the above experiments, the following conclusions can be drawn.
Firstly, the matching algorithm based on the rotation vector field greatly reduces computation and improves matching speed.
Secondly, the algorithm obtains good results in regions with rich and rarely repeated features; it is not suitable for regions with poor texture or a large number of repeated features.
Thirdly, the accuracy of the matching points obtained by this method is limited. These points cannot be used directly in photogrammetric processing, but they can serve as initial values for accurate matching.
Our further research will focus on the scale-change problem in image matching, which has not yet been solved.
REFERENCES
Lowe, D. G., 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110.

Bay, H., Tuytelaars, T., Van Gool, L., 2006. SURF: Speeded up robust features. European Conference on Computer Vision, pp. 404-417.

Zhang, L., Zhang, Z., Zhang, J., 1999. The image matching based on Wallis filtering. Journal of Wuhan Technical University of Surveying and Mapping, Vol. 24, No. 1, pp. 24-26.

Sim, D. G., Kim, H. K., Oh, D. I., 2000. Translation, scale, and rotation invariant texture descriptor for texture-based image retrieval. IEEE International Conference on Image Processing, No. 3, pp. 742-745.

Ullah, F., Kaneko, S., 2004. Using orientation codes for rotation invariant template matching. Pattern Recognition, 37, pp. 201-209.

Ke, Y., Sukthankar, R., 2004. PCA-SIFT: A more distinctive representation for local image descriptors. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 506-513.