The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B1. Beijing 2008 
2.4 Keypoint detection in scale-space 
Images can be represented at different scales; as the scale parameter increases, fine detail is progressively smoothed away. In other words, a small scale corresponds to the details of image features, while a large scale corresponds to the coarse profile.
The scale-space image is defined as the convolution of a 
variable-scale Gaussian with an input image 
L(x, y, σ) = G(x, y, σ) * I(x, y)    (3)
where (x, y) are the pixel coordinates of the image and σ is the scale factor.
The difference-of-Gaussians (DoG) function is computed from the difference of two nearby scales
separated by a constant multiplicative factor k:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)    (4)
The DoG function is similar to the scale-normalized Laplacian of Gaussian (LoG).
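The DoG construction in Eq. (4) can be sketched with a standard Gaussian filter. This is a minimal illustration, not the paper's implementation; it assumes NumPy and SciPy, and the choice k = √2 is a common convention rather than something fixed by the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma, k=2 ** 0.5):
    """D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma), per Eq. (4)."""
    low = gaussian_filter(image.astype(float), sigma)    # L(x, y, sigma)
    high = gaussian_filter(image.astype(float), k * sigma)  # L(x, y, k*sigma)
    return high - low

# Toy input standing in for an ERS image tile.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
dog = difference_of_gaussians(img, sigma=1.6)
```

In a full SIFT pipeline this response is evaluated across a stack of scales, and keypoints are local extrema of D across both space and scale.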
Every keypoint has information including location, gradient 
magnitude and orientation. The scale of the keypoint is used to 
select the Gaussian smoothed image, L, with the closest scale, 
so that all computations are performed in a scale-invariant 
manner. For each image sample, L(x, y), at this scale, the
gradient magnitude, m(x, y), and orientation, θ(x, y), are precomputed using
pixel differences:
m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]    (5)
θ(x, y) = tan⁻¹[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))]    (6)
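Eqs. (5) and (6) can be computed for all interior pixels at once. The sketch below uses `arctan2` in place of the plain arctangent ratio so the orientation quadrant is resolved correctly; border pixels are skipped to avoid out-of-range indexing.

```python
import numpy as np

def gradient_mag_ori(L):
    """Gradient magnitude and orientation from pixel differences, Eqs. (5)-(6).

    L is a 2-D smoothed image indexed as L[y, x]; returns arrays for the
    interior pixels only.
    """
    dx = L[1:-1, 2:] - L[1:-1, :-2]   # L(x+1, y) - L(x-1, y)
    dy = L[2:, 1:-1] - L[:-2, 1:-1]   # L(x, y+1) - L(x, y-1)
    m = np.sqrt(dx ** 2 + dy ** 2)          # Eq. (5)
    theta = np.arctan2(dy, dx)              # Eq. (6), quadrant-aware
    return m, theta

# Horizontal ramp: gradient magnitude 2 everywhere, orientation 0.
L = np.tile(np.arange(5.0), (5, 1))
m, theta = gradient_mag_ori(L)
```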
2.5 Producing the local image descriptor 
A keypoint descriptor is created by first computing the gradient 
magnitude and orientation at each image sample point in a 
region around the keypoint location. These are weighted by a 
Gaussian window, indicated by the overlaid circle. These 
samples are then accumulated into orientation histograms 
summarizing the content over 4 by 4 subregions, as shown in 
Fig. 2, with the length of each arrow corresponding to the sum 
of the gradient magnitudes near that direction within the region. 
Fig. 2 shows a 2 by 2 descriptor array computed from an 8 by 8 
set of samples. 
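The accumulation described above can be sketched as follows for the 2-by-2 array of subregions computed from an 8-by-8 sample patch, as in Fig. 2. The Gaussian window width and the nearest-bin voting are simplifying assumptions of this sketch; full SIFT uses trilinear interpolation between bins and a 4-by-4 descriptor array.

```python
import numpy as np

def descriptor_from_patch(mag, theta, n_sub=2, n_bins=8):
    """Build an orientation-histogram descriptor from a square gradient patch.

    mag, theta: per-sample gradient magnitude and orientation (e.g. 8x8).
    Magnitudes are Gaussian-weighted, then voted into n_bins orientation
    bins per n_sub x n_sub subregion; the result is length-normalized.
    """
    size = mag.shape[0]
    ys, xs = np.mgrid[0:size, 0:size]
    center = (size - 1) / 2.0
    # Gaussian weighting window ("the overlaid circle" in Fig. 2).
    w = np.exp(-((xs - center) ** 2 + (ys - center) ** 2) / (2 * (size / 2) ** 2))
    hist = np.zeros((n_sub, n_sub, n_bins))
    cell = size // n_sub
    # Map each orientation in [-pi, pi) to its nearest of n_bins bins.
    bins = ((theta % (2 * np.pi)) / (2 * np.pi) * n_bins).astype(int) % n_bins
    for y in range(size):
        for x in range(size):
            hist[y // cell, x // cell, bins[y, x]] += w[y, x] * mag[y, x]
    v = hist.ravel()
    return v / (np.linalg.norm(v) + 1e-12)  # normalize against illumination change

rng = np.random.default_rng(1)
mag = rng.random((8, 8))
theta = rng.uniform(-np.pi, np.pi, (8, 8))
v = descriptor_from_patch(mag, theta)   # 2 * 2 * 8 = 32-dimensional vector
```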
2.6 Keypoint match 
The best candidate match for each keypoint is found by 
identifying its nearest neighbour in the database of keypoints 
from the master image and slave image. The nearest neighbour 
is defined as the keypoint with the minimum Euclidean distance 
for the invariant descriptor vector. 
The distance to the closest neighbour is then compared with the distance to the
second-closest neighbour; if the ratio between them is less
than a threshold, the keypoint pair is accepted as a match.
The smaller the threshold, the fewer matching points we get,
and the more reliable the matching points are (Lowe, 2004). 
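The nearest-neighbour search with ratio test can be sketched as below. A brute-force Euclidean search is used here for clarity; the threshold default of 0.75 matches the value reported in Section 3.

```python
import numpy as np

def ratio_test_match(desc_master, desc_slave, threshold=0.75):
    """Match master descriptors to slave descriptors with Lowe's ratio test.

    A master keypoint i is matched to its nearest slave keypoint j only if
    d(nearest) < threshold * d(second nearest).
    Returns a list of (master_index, slave_index) pairs.
    """
    matches = []
    for i, d in enumerate(desc_master):
        dists = np.linalg.norm(desc_slave - d, axis=1)  # Euclidean distances
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < threshold * dists[second]:
            matches.append((i, int(nearest)))
    return matches

# Unambiguous match is kept; ambiguous one (ratio ~0.91) is rejected.
kept = ratio_test_match(np.array([[0.0, 0.0]]),
                        np.array([[0.1, 0.0], [5.0, 5.0]]))
rejected = ratio_test_match(np.array([[0.0, 0.0]]),
                            np.array([[1.0, 0.0], [1.1, 0.0]]))
```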
2.7 Rectification and Registration based on TIN 
In this part, we create a Triangulated Irregular Network (TIN)
by the minimum-distance method (Li, et al., 2006) in the two
images. Each of the large number of triangles has three tie
points (Xi, Yi), (X'i, Y'i), i = 1, 2, 3, which can be used to
calculate affine parameters:
X' = a0 + a1·X + a2·Y
Y' = b0 + b1·X + b2·Y    (7)
The three tie points yield six equations, from which we can calculate
a0, a1, a2 and b0, b1, b2; with these parameters we can map the triangle
P'1 P'2 P'3 on the slave image to the triangle P1 P2 P3 on the master
image. The process of rectification is shown in Fig. 2 (Liu,
et al., 2007). 
Fig. 2. The process of resampling 
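Solving Eq. (7) for one triangle can be sketched as below: the three tie-point pairs give two independent 3-by-3 linear systems, one for (a0, a1, a2) and one for (b0, b1, b2). The function name is illustrative, not from the paper.

```python
import numpy as np

def affine_from_triangle(src, dst):
    """Solve Eq. (7) for one TIN triangle.

    src: 3x2 array of (X, Y) tie points on the slave image's reference frame.
    dst: 3x2 array of corresponding (X', Y') points.
    Returns (a0, a1, a2) and (b0, b1, b2).
    """
    # Design matrix with rows [1, X, Y]; three points -> six equations total.
    A = np.column_stack([np.ones(3), src[:, 0], src[:, 1]])
    a = np.linalg.solve(A, dst[:, 0])   # X' = a0 + a1*X + a2*Y
    b = np.linalg.solve(A, dst[:, 1])   # Y' = b0 + b1*X + b2*Y
    return a, b

# Pure translation by (2, 3): a = (2, 1, 0), b = (3, 0, 1).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = src + np.array([2.0, 3.0])
a, b = affine_from_triangle(src, dst)
```

Each triangle of the TIN gets its own six parameters, so the slave image is warped piecewise rather than with a single global transformation.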
3. RESULTS 
In this paper, two ERS images of Tianjin (acquisition times
199710190253 and 199710180253) are used,
one as the master image and the other as the slave image, to test the
method. The SIFT algorithm was used to detect the keypoints in the
images, as shown in Fig. 3.
These keypoints were matched with the method above, as
shown in Fig. 5. Fig. 4 illustrates the matching points with the
threshold of 0.75. Finally, the slave image was rectified based on
the TIN. The two points linked by a white line are matching
points.
	        