In this scheme, the rotation part of the objective is minimized first, and the estimated rotation is then used to minimize the translation part and determine the translation t. This uses only the line direction to compute the rotation and therefore does not exploit all of the available constraints effectively. Moreover, when the rotation and translation are estimated separately, the small errors in the rotation are amplified into large errors in the translation (…, 1994). To fully exploit the constraints, we design Algorithm 1 to optimize alternately over the rotation matrix and the translation vector.
After the estimation of R_k and t_k is completed, we improve the rotation estimation iteratively: given the intermediate rotation value R'_{k+1}, and t_{k+1} obtained via t(R'_{k+1}) from Eq. (…), we use the method of (Lu et al., 2000) to refine the estimation by minimizing the object-space error. This step is described as follows. In (Lu et al., 2000), R and t are iteratively optimized by minimizing the objective function defined as

E(R, t) = \sum_{i} \| (I - V_i)(R P_i + t) \|^2.   (19)
The matrix V_i, when applied to a point, projects it orthogonally onto the line of sight defined by the i-th image point. Once the transformed points q_i^{(k)} = R^{(k)} P_i + t^{(k)} are obtained, the next estimate of the pose is computed by solving the following absolute orientation problem:

\min_{R, t} \sum_{i} \| R P_i + t - V_i q_i^{(k)} \|^2.   (20)

This absolute orientation problem is then solved in closed form (…, 1988).
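To make the role of the line-of-sight projector concrete, the following sketch (Python/NumPy; the function and variable names are ours, and V_i is taken as the usual projector v v^T / (v^T v) onto the viewing ray of a normalized image point v, as in (Lu et al., 2000)) evaluates an object-space error of the form of Eq. (19).

```python
import numpy as np

def line_of_sight_projector(v):
    """Projection matrix V = v v^T / (v^T v) onto the line of sight through
    the normalized image point v = (x, y, 1)."""
    v = np.asarray(v, dtype=float).reshape(3, 1)
    return (v @ v.T) / float(v.T @ v)

def object_space_error(R, t, points_3d, projectors):
    """Sum of squared residuals ||(I - V_i)(R P_i + t)||^2, i.e. an error of
    the form of Eq. (19)."""
    I = np.eye(3)
    total = 0.0
    for P_i, V_i in zip(points_3d, projectors):
        r = (I - V_i) @ (R @ P_i + t)
        total += float(r @ r)
    return total
```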
The objective function of Eq. (9), compared with Eq. (19), uses the line-based projection matrices instead of V_i. Both projection matrices have the same properties (Sect. 3.1). Hence an estimate of the pose can be obtained directly by using the algorithm of (Lu et al., 2000) to minimize the objective function (9). The resulting procedure is summarized in Algorithm 2.
It is also possible to solve for both R and t purely with the method of (Lu et al., 2000) (we denote it as LOI-3). Since the only difference lies in the projection vector, we do not repeat the derivation here; for more details, the readers are referred to (Lu et al., 2000).
4. EXPERIMENTS

We conduct experiments on both synthetic and real data.

4.1 Synthetic Data

In the synthetic experiments, 3D lines are generated randomly within a cube defined by [-0.5, 0.5] x [-0.5, 0.5] x [-0.5, 0.5] (Fig. ??) in the object space. The corresponding 2D lines are then created by linear fitting of the projections of the sampling points on the 3D lines.
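As a rough illustration of this synthetic setup (the pose (R_cam, t_cam), the normalized pinhole projection, the segment length and all names below are our assumptions, not specifications from the paper), the 3D lines and their fitted 2D counterparts could be generated as follows.

```python
import numpy as np

def random_3d_line(rng):
    """A 3D line through two random points of the cube [-0.5, 0.5]^3,
    returned as (point on the line, unit direction)."""
    p0, p1 = rng.uniform(-0.5, 0.5, (2, 3))
    d = (p1 - p0) / np.linalg.norm(p1 - p0)
    return p0, d

def fitted_2d_line(p0, d, R_cam, t_cam, n_samples=100):
    """Project n_samples points of the 3D line with an assumed pose
    (R_cam, t_cam) and a normalized pinhole camera, then fit a 2D line
    (centroid + unit direction) to the projections by total least squares.
    Assumes all sampled points lie in front of the camera."""
    s = np.linspace(-0.5, 0.5, n_samples)[:, None]
    pts3d = p0 + s * d                      # sampling points on the 3D line
    cam = pts3d @ R_cam.T + t_cam           # points in the camera frame
    proj = cam[:, :2] / cam[:, 2:3]         # normalized image coordinates
    c = proj.mean(axis=0)
    direction = np.linalg.svd(proj - c)[2][0]
    return c, direction
```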
  
Algorithm 2: Alternative Optimization 
  
1. Given N (N >= 3) 3D-to-2D line correspondences and an initial
   rotation R_0, compute K_i for i = 1, ..., N and
   B = (d_1, d_2, ..., d_N). Set k := 0.
2. Perform the following steps:
   (a) Compute A = (K_1 R_k d_1, ..., K_N R_k d_N).
   (b) Compute M = A B^T and perform the SVD: U D V^T = M.
   (c) Compute R'_{k+1} = U S V^T, where S is set according to
       Eqs. (15) and (16).
   (d) Compute t'_{k+1} = t(R'_{k+1}).
   (e) Given R'_{k+1} and t'_{k+1}, compute R_{k+1} using the
       algorithm of (Lu et al., 2000).
   (f) Compute t_{k+1} = t(R_{k+1}).
   (g) Terminate the iteration if convergence is attained;
       otherwise, set k = k + 1 and go to step (a).
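Read as pseudocode, the loop of Algorithm 2 maps onto a few lines of linear algebra. The sketch below (Python/NumPy) mirrors steps (a)-(g) but treats the quantities defined earlier in the paper, the matrices K_i, the closed-form translation t(R) and the matrix S of Eqs. (15) and (16), as user-supplied inputs and callables, since their definitions lie outside this section; it illustrates the structure rather than providing a complete implementation.

```python
import numpy as np

def alternative_optimization(K, d, B, t_of_R, S_from_svd, oi_rotation_update,
                             R0, max_iter=200, tol=1e-10):
    """Sketch of Algorithm 2.
    K: list of matrices K_i;  d: list of line directions d_i;
    B = (d_1, ..., d_N) as a 3xN array;  t_of_R: closed-form translation t(R);
    S_from_svd(U, D, Vt): builds S per Eqs. (15) and (16);
    oi_rotation_update(R, t): one rotation refinement with the algorithm
    of Lu et al. (2000)."""
    R_k = R0
    for _ in range(max_iter):
        # (a) stack the columns K_i R_k d_i
        A = np.column_stack([K_i @ R_k @ d_i for K_i, d_i in zip(K, d)])
        # (b) SVD of M = A B^T
        U, D, Vt = np.linalg.svd(A @ B.T)
        # (c) rotation estimate R'_{k+1} = U S V^T
        R_prime = U @ S_from_svd(U, D, Vt) @ Vt
        # (d) corresponding translation t'_{k+1} = t(R'_{k+1})
        t_prime = t_of_R(R_prime)
        # (e)-(f) refine with the orthogonal-iteration step of Lu et al. (2000)
        R_next = oi_rotation_update(R_prime, t_prime)
        t_next = t_of_R(R_next)
        # (g) stop when the rotation update becomes negligible
        if np.linalg.norm(R_next - R_k) < tol:
            return R_next, t_next
        R_k = R_next
    return R_k, t_of_R(R_k)
```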
  
We add Gaussian noise to the projections of the points and also consider a percentage p_out of outliers, for which a set of 3D lines is randomly selected and replaced by other lines generated within the cube [-0.5, 0.5] x [-0.5, 0.5] x [-0.5, 0.5]. For each setting of the control parameters in every plot, the result is obtained by running 1000 trials and recording the mean value. To facilitate the description, we denote Algorithms 1 to 3 as LOI-1, LOI-2 and LOI-3, respectively.
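A minimal sketch of this corruption step (function and variable names are ours; the paper specifies only the Gaussian pixel noise on the projections and the random replacement of a fraction p_out of the 3D lines):

```python
import numpy as np

def corrupt_observations(image_points, lines_3d, sigma_px, p_out, rng=None):
    """Add zero-mean Gaussian noise (std sigma_px) to the projected sample
    points and replace a fraction p_out of the 3D lines by random lines
    drawn from the same cube [-0.5, 0.5]^3."""
    rng = np.random.default_rng(rng)
    noisy = [p + rng.normal(0.0, sigma_px, p.shape) for p in image_points]
    lines = list(lines_3d)
    n_out = int(round(p_out * len(lines)))
    for idx in rng.choice(len(lines), size=n_out, replace=False):
        p0, p1 = rng.uniform(-0.5, 0.5, (2, 3))
        lines[idx] = (p0, (p1 - p0) / np.linalg.norm(p1 - p0))  # outlier line
    return noisy, lines
```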
In Fig. 3, we plot the rotation and translation relative errors produced by the three algorithms as a function of Gaussian noise, with the standard deviation varying from 1 to 10 pixels. The number of sampling points used for creating the 2D lines is set to 100. The number of lines is fixed to 8 and the percentage of outliers p_out = 0. The plots show that LOI-2 is consistently more accurate than the other two algorithms.
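The relative errors reported below can be computed, for example, as follows (this particular normalization is our assumption; the paper does not define the metric in this excerpt):

```python
import numpy as np

def relative_pose_errors(R_est, t_est, R_true, t_true):
    """Frobenius-norm rotation error and Euclidean translation error,
    each normalized by the ground-truth magnitude."""
    e_rot = np.linalg.norm(R_est - R_true, 'fro') / np.linalg.norm(R_true, 'fro')
    e_trans = np.linalg.norm(t_est - t_true) / np.linalg.norm(t_true)
    return e_rot, e_trans
```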
  
[Figure: two panels plotting the rotation relative error and the translation relative error against Gaussian image noise (pixels).]
Figure 3: Relative rotation and translation error as a function of 
image noise when the number of lines is fixed to be 8. 
Fig. 4 plots the errors as a function of the number of 3D object lines when the image noise is fixed (σ = 3 pixels). We compare the performance of our algorithms with the OI method (Lu et al., 2000), for which the corresponding 2N endpoints of the 3D lines are used. It can be seen that all of these algorithms achieve higher accuracy as the number of feature correspondences increases. The LOI-2 and OI algorithms show more accurate and stable performance.
  
[Figure: two panels plotting the rotation relative error and the translation relative error against the number of 3D object lines (5 to 20) for LOI-1, LOI-2, LOI-3 and OI.]
Figure 4: Relative rotation and translation error as a function of 
the number of object lines when the standard deviation of image 
noise is fixed to be 3 pixels. 
In Fig. 5(a), we give the percentage of convergence when the initial poses are generated from a multivariate normal distribution with the true pose as mean and diagonal covariance δΣ, where the standard deviation elements of Σ are about 1.5 degrees for the rotation angles, 0.2 for the x and y components of the translation t, and 0.5 for the z component; δ varies from 1 to 20. The plot indicates that LOI-2 is very robust and slightly outperforms the OI algorithm, which is proven to be globally convergent. In contrast, the LOI-3 algorithm performs very poorly. We conclude that, without exploiting the direction information of the lines, LOI-3 is very sensitive to the image noise as well as to the initial pose. Fig. 5(b) plots the number of iterations as a function of the number of object lines. As the number of lines increases, the number of iterations needed decreases. Since LOI-2 performs two rotation updates per iteration, the computation time of one LOI-2 iteration is about double that of LOI-1 and LOI-3. This can be seen from Fig. 5(c), which gives the computation times. LOI-1 and LOI-2 have almost the same running times, and LOI-3 becomes faster as the number of lines increases. We also compare our methods with the iterative weak perspective (IWP) method (Christy and Horaud, 1999), which estimates a pose with a weak perspective camera model and improves the estimate iteratively by solving an approximate system of linear equations. Our orthogonal iteration methods are very efficient and comparable to the IWP method.
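The initial-pose perturbation used for Fig. 5(a) can be sketched as follows (the rotation-vector parametrization and all names are our assumptions; only the standard deviations and the scaling by δ come from the text above, a covariance of δΣ implying standard deviations scaled by sqrt(δ)):

```python
import numpy as np

def rodrigues(omega):
    """Rotation matrix for the rotation vector omega (Rodrigues' formula)."""
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        return np.eye(3)
    k = omega / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def perturbed_initial_pose(R_true, t_true, delta, rng=None):
    """Draw an initial pose around the true pose: rotation-angle std of about
    1.5 degrees, translation std (0.2, 0.2, 0.5), all scaled by sqrt(delta)."""
    rng = np.random.default_rng(rng)
    scale = np.sqrt(delta)
    d_rot = rng.normal(0.0, np.deg2rad(1.5) * scale, 3)
    d_t = rng.normal(0.0, np.array([0.2, 0.2, 0.5]) * scale)
    return rodrigues(d_rot) @ R_true, t_true + d_t
```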
4.2 Real Data 
We also validate our pose estimation approach for line correspondences by using the algorithm for 3D line object tracking.
 
	        