International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B1, 2012 
XXII ISPRS Congress, 25 August - 01 September 2012, Melbourne, Australia 
ITERATIVE DETERMINATION OF CAMERA POSE FROM LINE FEATURES 
Xiaohu Zhang, Xiangyi Sun, Yun Yuan, Zhaokun Zhu, Qifeng Yu 
College of Aerospace and Materials Engineering, National University of Defense Technology 
Changsha, 410073, P.R. China - zxh1302@hotmail.com 
Hunan Key Laboratory of Videometrics and Vision Navigation, Changsha, 410073, P.R.China 
Commission III/1 
KEY WORDS: pose estimation, line feature, orthogonal iteration 
ABSTRACT: 
We present an accurate and efficient solution for pose estimation from line features. By introducing coplanarity errors, we formulate 
the objective functions in terms of distances in the 3D scene space, and use different optimization strategies to find the best rotation and 
translation. Experiments show that the algorithm is strongly robust to noise and outliers, and that it can attain very accurate results 
efficiently. 
1 INTRODUCTION 
Camera pose estimation is a basic task in photogrammetry and 
computer vision, with many applications in visual navigation, 
object recognition, augmented reality, etc. 
The problem of pose estimation has been studied for a long time 
in the photogrammetry and computer vision communities, and 
numerous methods have been proposed. Most existing approaches 
solve the problem using point features, in which case the problem 
is also known as the Perspective-n-Point (PnP) problem (Haralick 
et al., 1989, Horaud et al., 1997, Quan and Lan, 1999, 
Moreno-Noguer et al., 2007). 
Although point features were the first to be used in pose estimation, 
the line feature, which has the advantages of robust detection and 
richer structural information, is gaining increasing attention. 
Typically, in indoor environments, many man-made objects 
have planar surfaces with uniform color or poor texture, where 
few point features can be localized; yet such objects are abundant 
in line features, which can be localized more stably and accurately. 
Moreover, line features are less likely to be affected by occlusions 
thanks to their multi-pixel support. 
Closed-form algorithms were derived for three-line correspondences, 
but multiple solutions may appear (Dhome et al., 1989, 
Chen, 1991). A linear solution (Ansar and Daniilidis, 2003) was 
proposed for solving the pose estimation problem from n points 
or n lines. It guarantees a solution for n > 4 if the world objects 
do not lie in a critical configuration. For fast or real-time applications, 
such closed-form or linear algorithms, which are free of initialization 
(Dhome et al., 1989, Liu et al., 1988, Chen, 1991, Ansar and 
Daniilidis, 2003), can be used. To obtain more accurate 
results, iterative algorithms based on nonlinear optimization (Liu 
et al., 1990, Lee and Haralick, 1996, Christy and Horaud, 1999) 
are generally required. However, they generally do not fully exploit 
the specific structure of the pose estimation problem, and the 
usual Euler-angle parameterization of rotation cannot always 
enforce the orthogonality constraint of the rotation matrix. 
Moreover, the typical iterative framework that uses classical optimization 
techniques such as the Newton and Levenberg-Marquardt 
methods may lack sufficient efficiency (Phong et al., 1995, Lu et 
al., 2000). 
One interesting exception among the iterative algorithms is the 
Orthogonal Iteration (OI) algorithm developed for point features 
(Lu et al., 2000), which is not only accurate, but also robust to 
corrupted data and fast enough for real-time applications. The 
OI algorithm formulates the pose estimation problem as minimizing 
an error metric based on collinearity in object space, and it 
iteratively computes orthogonal rotation matrices in a globally 
convergent manner. 
Inspired by this method, we present an accurate and efficient 
solution for pose estimation from line features. By introducing 
coplanarity errors, we formulate the objective functions in terms 
of distances in the 3D scene space, and use different optimization 
strategies to find the best rotation and translation. We show 
by experiments that the algorithm, which fully exploits the line 
constraint information, can attain accurate and robust results 
efficiently even under strong noise and outliers. 
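As a rough sketch of this formulation (our own illustrative code, not the paper's implementation; the function name and data layout are assumptions), the coplanarity-based objective can be written as a sum of squared scene-space residuals over all line correspondences, where each residual measures how far a transformed 3D line departs from the interpretation plane of its image line:

```python
import numpy as np

def coplanarity_cost(R, t, lines_3d, normals):
    """Sum of squared coplanarity residuals over all line correspondences.

    lines_3d: list of (d_i, P_i), with d_i a unit direction vector and
              P_i a point on the i-th 3D line, in object coordinates.
    normals:  list of unit normals n_i of the interpretation planes,
              computed from the 2D image lines.
    A correct pose (R, t) drives every n_i^T R d_i and n_i^T (R P_i + t)
    to zero, so the cost vanishes for noise-free data.
    """
    cost = 0.0
    for (d, P), n in zip(lines_3d, normals):
        cost += (n @ (R @ d)) ** 2 + (n @ (R @ P + t)) ** 2
    return cost
```

An iterative solver of the kind the paper describes would alternate between minimizing this cost over the rotation and over the translation.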
2 CAMERA MODEL 
The geometric model of a camera is depicted in Fig. 1. Let c-x_c y_c z_c 
be the camera coordinate system with the origin fixed at 
the focal point, and the axis z_c coinciding with the optical axis 
and pointing to the front of the camera. I denotes the normalized 
image plane. o-x_w y_w z_w is the object coordinate system. L_i is a 
3D line in the space and l_i is its 2D image projection on the image 
plane. It can be seen that the optical center, the 2D image line 
l_i, and the 3D line L_i are on the same plane, which is called the 
interpretation plane (Dhome et al., 1989). In the object coordinate 
system, L_i can be described as lambda*d_i + P_i, where d_i = (d_i^x, d_i^y, d_i^z)^T 
is the unit direction of the line, P_i = (x_i, y_i, z_i)^T is an arbitrary 
point on the line, and lambda is a scalar. The 2D image line l_i in the 
camera coordinate system can be expressed as: a_i x + b_i y + c_i = 0. 
We define a unit vector n_i = (a_i, b_i, c_i)^T, which represents l_i as 
(x, y, 1) . n_i = 0. It is clear that n_i is the normal vector of the 
interpretation plane. 
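As a minimal numeric illustration (the helper name is ours, not from the paper), the unit normal n_i of the interpretation plane is obtained by normalizing the coefficients of the image line:

```python
import numpy as np

def interpretation_plane_normal(a, b, c):
    """Normalize the coefficients (a_i, b_i, c_i) of the image line
    a_i x + b_i y + c_i = 0 on the normalized image plane into the
    unit normal n_i of the interpretation plane through the optical center."""
    n = np.array([a, b, c], dtype=float)
    return n / np.linalg.norm(n)

# Any point (x, y) on the image line satisfies (x, y, 1) . n_i = 0.
n = interpretation_plane_normal(1.0, -1.0, 0.0)   # the line y = x
print(np.dot([2.0, 2.0, 1.0], n))                 # prints 0.0 for the point (2, 2)
```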
The direction vector d_i and the point P_i can be expressed in the 
camera coordinate system as R d_i and R P_i + t, where the 3 x 3 
rotation matrix R and the translation vector t describe the rigid 
transformation between the object coordinate system and the camera 
coordinate system. Since the two vectors are both in the interpretation 
plane, we have: 

n_i^T R d_i = 0,  (1) 

n_i^T (R P_i + t) = 0.  (2) 
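These two constraints can be checked on synthetic data (the ground-truth pose, line, and variable names below are our own assumptions for illustration): pick a pose (R, t) and a 3D line (d_i, P_i), project two of its points to build n_i, and verify that both residuals of Eqs. (1) and (2) vanish.

```python
import numpy as np

# Ground-truth pose: rotation by 0.3 rad about the z axis, plus a translation.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.1, -0.2, 5.0])

# A 3D line in object coordinates: unit direction d_i and a point P_i on it.
d = np.array([1.0, 2.0, 0.5])
d /= np.linalg.norm(d)
P = np.array([0.3, -0.1, 0.4])

# Two points of the line in camera coordinates, projected to the
# normalized image plane z = 1.
Q1 = R @ P + t
Q2 = R @ (P + 2.0 * d) + t
p1, p2 = Q1 / Q1[2], Q2 / Q2[2]

# n_i is normal to the interpretation plane spanned by the two viewing rays.
n = np.cross(p1, p2)
n /= np.linalg.norm(n)

print(abs(n @ (R @ d)))        # Eq. (1): numerically zero
print(abs(n @ (R @ P + t)))    # Eq. (2): numerically zero
```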