[Figure 5 plots: curves for LOI-1, LOI-2 and LOI-3, with one panel also showing an iterative weak-perspective method; axis labels include "Number of lines" and "The difference of initial pose".]

Figure 5: (a) the number of iterations and (b) the running times as a function of the number of lines. (c) the percentage of convergence when initial poses are generated from a multinormal distribution.
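The convergence test of Fig. 5(c) can be reproduced in outline as follows. The sketch below is a minimal illustration, assuming Gaussian perturbations of a reference pose, an arbitrary rotation-error threshold as the convergence criterion, and a placeholder estimate_pose function standing in for the estimator under test (e.g. LOI-2); none of these choices are taken from the paper's exact protocol.

    # Illustrative convergence experiment: perturb a reference pose with
    # multivariate normal noise and count how often the estimator converges back.
    # `estimate_pose` is a placeholder for the pose estimator under test.
    import numpy as np
    import cv2

    def random_initial_pose(R_ref, t_ref, sigma_rot_deg=10.0, sigma_trans=0.1, rng=None):
        """Draw an initial pose by perturbing (R_ref, t_ref) with Gaussian noise."""
        rng = np.random.default_rng() if rng is None else rng
        axis_angle = rng.normal(0.0, np.deg2rad(sigma_rot_deg), size=3)
        dR, _ = cv2.Rodrigues(axis_angle.reshape(3, 1))   # small random rotation
        dt = rng.normal(0.0, sigma_trans, size=3)          # small random translation
        return dR @ R_ref, t_ref + dt

    def convergence_rate(estimate_pose, lines_3d, lines_2d, R_ref, t_ref,
                         trials=100, rot_tol_deg=1.0):
        """Fraction of random initial poses from which the estimator returns
        (approximately) to the reference rotation."""
        rng = np.random.default_rng(0)
        hits = 0
        for _ in range(trials):
            R0, t0 = random_initial_pose(R_ref, t_ref, rng=rng)
            R, t = estimate_pose(lines_3d, lines_2d, R0, t0)
            # Rotation error (degrees) between the estimate and the reference.
            cos_err = (np.trace(R_ref.T @ R) - 1.0) / 2.0
            err_deg = np.degrees(np.arccos(np.clip(cos_err, -1.0, 1.0)))
            hits += err_deg < rot_tol_deg
        return hits / trials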
The tracking is performed on an image sequence recorded by a moving calibrated camera pointing towards the scene, as shown in Fig. 6. We implement a simple line tracker in a typical framework in which, for a set of points sampled on a model edge line, a local search for gradient maxima is performed along the normal direction of the line (Wuest et al., 2005). The strong maxima are taken as 2D feature points whose corresponding 3D sample points lie on the object line. At run time, the tracker therefore generates a set of 3D-to-2D line correspondences, among which outliers or erroneous matches exist. A robust pose estimation method that resists outliers well is valuable for robust tracking. We use our method, specifically LOI-2, for the tracking. Fig. 6 shows four frames of the tracked sequence. Our method consistently tracks the whole sequence.
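As an illustration of this front end, the following sketch samples points on a projected model edge and searches along the line normal for strong gradient responses, assuming a grey-level image and OpenCV's Sobel operator; the sampling step, search range, threshold, and function names are illustrative assumptions, not the exact implementation of Wuest et al. (2005) or of our tracker.

    import numpy as np
    import cv2

    def sample_points_on_line(p0, p1, step=10.0):
        """Sample 2D points along the projected model edge from p0 to p1 (pixels)."""
        p0, p1 = np.asarray(p0, dtype=float), np.asarray(p1, dtype=float)
        n = max(int(np.linalg.norm(p1 - p0) // step), 2)
        ts = np.linspace(0.0, 1.0, n)
        return p0 + ts[:, None] * (p1 - p0)

    def search_gradient_maximum(gx, gy, pt, normal, search_range=15, grad_thresh=40.0):
        """1-D search along the edge normal for the strongest gradient response.
        Returns the matched 2D point, or None if no strong maximum is found."""
        h, w = gx.shape
        best_pt, best_mag = None, grad_thresh
        for d in range(-search_range, search_range + 1):
            q = pt + d * normal
            x, y = int(round(q[0])), int(round(q[1]))
            if 0 <= x < w and 0 <= y < h:
                # Project the image gradient onto the line normal so that only
                # edges roughly parallel to the model line respond strongly.
                mag = abs(gx[y, x] * normal[0] + gy[y, x] * normal[1])
                if mag > best_mag:
                    best_mag, best_pt = mag, q
        return best_pt

    def match_model_line(gray, proj_p0, proj_p1):
        """Return the 2D feature points found for one projected model edge line."""
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        d = np.asarray(proj_p1, dtype=float) - np.asarray(proj_p0, dtype=float)
        d /= np.linalg.norm(d)
        normal = np.array([-d[1], d[0]])   # unit normal of the projected line
        matches = [search_gradient_maximum(gx, gy, pt, normal)
                   for pt in sample_points_on_line(proj_p0, proj_p1)]
        # Each surviving 2D match, paired with its 3D sample point on the object
        # line, forms one 3D-to-2D correspondence fed to the pose estimator (LOI-2).
        return np.array([m for m in matches if m is not None])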
5 CONCLUSIONS AND FUTURE WORK 
Robust pose estimation is necessary for refining the pose. We presented efficient and robust iterative pose estimation algorithms for line features. Our method introduces coplanarity errors and formulates the objective function in object space by employing orthogonal projections. Within the same framework, three pose estimation algorithms are given and their performances are evaluated. Compared with other pose estimation algorithms for lines, one of the proposed methods, the LOI-2 algorithm, is extremely robust and accurate, and it also converges quickly.
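As a rough sketch of the kind of object-space term summarised above (an illustration of the general idea rather than the paper's exact objective): each image line l_i back-projects, together with the camera centre, to an interpretation plane with unit normal n_i, and sample points X_{ij} of the corresponding model line should lie in that plane after the rigid transformation (R, t), which leads to minimising

    E(R, t) = \sum_i \sum_j \left\| (I - P_i)\,(R X_{ij} + t) \right\|^2, \qquad P_i = I - n_i n_i^\top,

where P_i is the orthogonal projection onto the interpretation plane of line i, so each residual measures, in object space, the distance of a transformed model point from that plane.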
For future work, we are interested in applying our methods to real applications such as robot navigation. By making use of additional information, such as object appearance, the search for a pose from unknown line correspondences may be sped up. We are also interested in implementing our simultaneous pose and correspondence method on the GPU for real-time virtual reality applications.
REFERENCES 
Ansar, A. and Daniilidis, K., 2003. Linear pose estimation from points or lines. IEEE Transactions on Pattern Analysis and Machine Intelligence.

Chen, H. H., 1991. Pose determination from line-to-plane correspondences: Existence condition and closed-form solutions. IEEE Transactions on Pattern Analysis and Machine Intelligence.

Christy, S. and Horaud, R., 1999. Iterative pose computation from line correspondences. Computer Vision and Image Understanding.
Dhome, M., Richetin, M., Lapreste, J.-T. and Rives, G., 1989. Determination of the attitude of 3-D objects from a single perspective view. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Haralick, R. M., Joo, H., Lee, C.-N., Zhuang, X., Vaidya, V. G. and Kim, M. B., 1989. Pose estimation from corresponding point data. International Journal of Computer Vision.
Horaud, R., Dornaika, F., Lamiroy, B. and Christy, S., 1997. Object pose: the link between weak perspective, paraperspective, and full perspective. International Journal of Computer Vision 22(2), pp. 173-189.
Horn, B. K. P., Hilden, H. and Negahdaripour, S., 1988. Closed-form solution of absolute orientation using orthonormal matrices. Journal of the Optical Society of America A 5(7), pp. 1127-1135.

Kumar, P., 1994. Robust methods for estimating pose and a sensitivity analysis. Image Understanding 60(3), pp. 313-342.
Lee, C.-N. and Haralick, R. M., 1996. Statistical estimation for exterior orientation from line-to-line correspondences. Image and Vision Computing 14, pp. 379-388.

Liu, Y., Huang, T. S. and Faugeras, O. D., 1990. Determination of camera location from 2-D to 3-D line and point correspondences. IEEE Transactions on Pattern Analysis and Machine Intelligence.

Liu, Y., Huang, T. S. and Faugeras, O. D., 1988. A linear algorithm for motion estimation using straight line correspondences. Computer Vision, Graphics, and Image Processing.
Lu, C.-P., Hager, G. D. and Mjolsness, E., 2000. Fast and globally convergent pose estimation from video images. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Moreno-Noguer, F., Lepetit, V. and Fua, P., 2007. Accurate non-iterative O(n) solution to the PnP problem. In: IEEE International Conference on Computer Vision.
Navab, N. and Faugeras, O., 1993. Monocular pose determination 
from lines: Critical sets and maximum number of solutions. In: 
IEEE Conference on Computer Vision and Pattern Recognition. 
Phong, T. Q., Horaud, R., Yassine, A. and Tao, P. D., 1995. Object pose from 2-D to 3-D point and line correspondences. International Journal of Computer Vision 15(3), pp. 225-243.

Quan, L. and Lan, Z., 1999. Linear N-point camera pose determination. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Umeyama, S., 1991. Least-squares estimation of transformation parameters between two point patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence.

Wuest, H., Vial, F. and Stricker, D., 2005. Adaptive line tracking with multiple hypotheses for augmented reality. In: 4th IEEE and ACM International Symposium on Mixed and Augmented Reality.