
video frames are performed, and the exterior orientation parameters for each video frame and the 3-D coordinates of the natural feature points are obtained.
Figure 1 shows the flow of the robust exterior orientation procedure.

Figure 1. Flow of Robust Exterior Orientation
2.2 Least Median of Squares Method
Generally, the corresponding points include erroneous correspondences as outliers. Therefore, in order to reject these outliers automatically, robust regression based on the LMedS method is applied in both the tracking process and the bundle adjustment process.

The LMedS method is one of the robust regression methods proposed by Rousseeuw (1986) [7]. Classical least squares regression minimizes the sum of the squared residuals, so its result is strongly influenced by any outliers contained in the data. In order to perform robust regression on data containing outliers, the LMedS method replaces the sum of the squared residuals with the median of the squared residuals, as in the following equation:

LMedS = min( med( e_i^2 ) )

where
e_i = residual of the i-th data
i = index of data (i = 1~n)
med( ) = median
min( ) = minimum
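As a simple illustration of this criterion (not part of the original system), the following Python sketch evaluates the LMedS cost of a candidate model from its residuals; the function name and the use of NumPy are assumptions.

```python
import numpy as np

def lmeds_cost(residuals):
    """Median of the squared residuals: the quantity the LMedS
    estimator minimizes over candidate models."""
    residuals = np.asarray(residuals, dtype=float)
    return float(np.median(residuals ** 2))

# Example: a single gross outlier barely changes the LMedS cost,
# whereas it would dominate a least-squares (sum of squares) cost.
print(lmeds_cost([0.1, -0.2, 0.15, 0.05, 30.0]))
```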
2.3 Tracking Process

Figure 2 shows the flow of the tracking process, and the details are as follows.

Figure 2. Flow of Tracking Process
2.3.1 Feature Point Extraction: In the first frame of the video image sequence, many natural feature points are extracted using the Moravec operator, and a template image of each natural feature point is acquired simultaneously for SSDA template matching.
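For illustration only, the following Python sketch computes a Moravec-style interest measure on a grayscale NumPy array; the window size, shift set, and naive loops are assumptions rather than details taken from the paper.

```python
import numpy as np

def moravec_response(img, win=3):
    """Moravec interest measure: for each pixel, the minimum over a set
    of shifts of the sum of squared differences between the local window
    and its shifted copy. Corner-like points give a large minimum."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    r = win // 2
    shifts = [(1, 0), (0, 1), (1, 1), (1, -1)]
    response = np.zeros((h, w))
    for y in range(r + 1, h - r - 1):
        for x in range(r + 1, w - r - 1):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            ssd = []
            for dy, dx in shifts:
                shifted = img[y + dy - r:y + dy + r + 1,
                              x + dx - r:x + dx + r + 1]
                ssd.append(np.sum((patch - shifted) ** 2))
            response[y, x] = min(ssd)
    return response

# Feature points can then be taken as local maxima of `response`
# above a chosen threshold.
```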
2.3.2 Template Matching: SSDA template matching is performed between the first and second frames, and temporary correspondences of the natural feature points are obtained.
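The following is a minimal sketch of SSDA-style matching with early termination, assuming grayscale NumPy arrays; the function name and the exhaustive search over the window are illustrative choices, not the authors' implementation.

```python
import numpy as np

def ssda_match(template, search):
    """SSDA-style template matching: accumulate absolute differences at
    each candidate position and abandon that position as soon as the
    partial sum exceeds the best sum found so far."""
    template = np.asarray(template, dtype=float)
    search = np.asarray(search, dtype=float)
    th, tw = template.shape
    sh, sw = search.shape
    best_pos, best_err = None, np.inf
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            err = 0.0
            for row in range(th):
                err += np.sum(np.abs(search[y + row, x:x + tw] - template[row]))
                if err >= best_err:      # early termination (the SSDA idea)
                    break
            else:
                best_pos, best_err = (x, y), err
    return best_pos, best_err
```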
2.3.3 Detection of Outlier: The temporary correspondences of natural feature points generally include many erroneous corresponding points as outliers with respect to the transformation between consecutive video frames. In this system, the transformation between consecutive video frames is approximated by an affine transformation, as in the following equations:
u_{i+1} = a1 u_i + a2 v_i + a3
v_{i+1} = a4 u_i + a5 v_i + a6

where
a1~a6 = affine parameters
(u_i, v_i) = image coordinates at frame i
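One possible way to set up the linear system for the six affine parameters from point correspondences is sketched below in Python; the function name and the least-squares formulation are assumptions, not details from the paper.

```python
import numpy as np

def fit_affine(src, dst):
    """Solve u' = a1*u + a2*v + a3, v' = a4*u + a5*v + a6 for a1..a6
    from corresponding points (exact for 3 pairs, least squares for more)."""
    src = np.asarray(src, dtype=float)   # (n, 2) points in frame i
    dst = np.asarray(dst, dtype=float)   # (n, 2) points in frame i+1
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)                  # [u'_1, v'_1, u'_2, v'_2, ...]
    A[0::2, 0:2] = src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params                        # [a1, a2, a3, a4, a5, a6]
```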
The unknown affine parameters are calculated using the LMedS method with a random sampling algorithm. Figure 3 shows the flow of the LMedS method in this process, and the details are as follows:
a) First, a random sample of corresponding points is selected.
b) Affine parameters are calculated from the sampled corresponding points.
c) The resulting affine transformation model is evaluated with the LMedS criterion.
d) These procedures are repeated, and the affine parameters that minimize the LMedS criterion are selected.
e) Finally, outliers among the corresponding points are detected by thresholding the residuals of the corresponding points.
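A compact Python sketch of this sampling loop is given below. It is an illustrative implementation under stated assumptions (NumPy, a fixed number of trials, a Euclidean reprojection residual, and an arbitrary threshold); the paper does not specify these choices, and the function and parameter names are hypothetical.

```python
import numpy as np

def lmeds_affine(src, dst, n_trials=500, n_sample=3, thresh=2.0, seed=0):
    """Estimate the frame-to-frame affine transform with LMedS:
    repeatedly fit from a minimal random sample, keep the parameters
    whose median squared residual is smallest, then flag outliers by
    thresholding the residuals (steps a-e above)."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)

    def fit(s, d):
        # Linear system for a1..a6, as in the affine equations above.
        A = np.zeros((2 * len(s), 6))
        A[0::2, 0:2], A[0::2, 2] = s, 1.0
        A[1::2, 3:5], A[1::2, 5] = s, 1.0
        p, *_ = np.linalg.lstsq(A, d.reshape(-1), rcond=None)
        return p

    def residuals(p):
        pred_u = p[0] * src[:, 0] + p[1] * src[:, 1] + p[2]
        pred_v = p[3] * src[:, 0] + p[4] * src[:, 1] + p[5]
        return np.hypot(dst[:, 0] - pred_u, dst[:, 1] - pred_v)

    best_p, best_cost = None, np.inf
    for _ in range(n_trials):                       # a) random sampling
        idx = rng.choice(n, size=n_sample, replace=False)
        p = fit(src[idx], dst[idx])                 # b) fit affine parameters
        cost = np.median(residuals(p) ** 2)         # c) LMedS evaluation
        if cost < best_cost:                        # d) keep the best model
            best_p, best_cost = p, cost
    inliers = residuals(best_p) < thresh            # e) reject outliers
    return best_p, inliers
```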
	        