CAMERA CALIBRATION BY ROTATION
Petteri Pöntinen
Helsinki University of Technology
Institute of Photogrammetry and Remote Sensing
P.O.Box 1200, FIN-02015 HUT, Finland
petteri.pontinen@hut.fi
Commission V
KEY WORDS: Calibration, Camera, Image, Parameters, Least Squares, Single Station.
ABSTRACT:
This paper describes a camera calibration method that does not require any 3-D control data. The main idea of the method is to rotate the camera around its projection center and derive the calibration parameters from the captured images. In the calibration calculations, the camera constant, the principal point coordinates and the lens distortion parameters are solved. The objective of this study was to assess the practical feasibility of the method. The presented method was tested with synthetic and real data. Due to the simple mathematical model (only rotations between the images), the calculations converged to the correct solutions even when very weak initial values for the unknown camera parameters were used. It can be concluded that under suitable conditions the presented method is an alternative to traditional test field calibration.
1. INTRODUCTION
Camera calibration by rotation has also been studied by several other authors, but the focus of those studies and their mathematical formulations of the problem have been different. In many of these papers the process has been called single station camera calibration.
In (Hartley, 1994) the camera calibration is based on the 2-D projective correspondence which occurs between two overlapping images taken from the same point. The correspondence between the image coordinates is
u_i' = P_i u_i,    (1)
where u_i' and u_i are the homogeneous image coordinate vectors and P_i is a 3x3 matrix. Furthermore,
P_i = K R_i K^{-1},    (2)
where R_i is a rotation matrix and K is the camera calibration matrix. Matrix K is written as
K = \begin{pmatrix} k_u & s & p_u \\ 0 & k_v & p_v \\ 0 & 0 & 1 \end{pmatrix},    (3)
where k_u and k_v are the scale parameters of the coordinate directions, p_u and p_v are the coordinates of the principal point, and s is the skew parameter of the coordinate axes. Each overlapping image pair gives one P_i. In (Hartley, 1994) it is shown how the common matrix K can be solved based on all the transformation matrices P_i.
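To make equations (2) and (3) concrete, the following sketch (Python with NumPy; an illustration in the spirit of (Hartley, 1994), not code from the paper) recovers a common K from a set of inter-image homographies. Because each R_i is orthogonal, the symmetric matrix C = K K^T satisfies P_i C P_i^T = C, which is linear in the six unknown entries of C; K then follows from a triangular factorization. All function and variable names are invented for the example.

```python
import numpy as np

def recover_K(homographies):
    # C = K K^T is invariant under each homography: P C P^T = C,
    # because P = K R K^{-1} and R R^T = I. These constraints are
    # linear in the six unknown entries of the symmetric matrix C.
    idx = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (0, 2): 2, (2, 0): 2,
           (1, 1): 3, (1, 2): 4, (2, 1): 4, (2, 2): 5}
    rows = []
    for P in homographies:
        P = P / np.cbrt(np.linalg.det(P))      # fix the free scale: det(P) = 1
        for a in range(3):
            for b in range(3):
                row = np.zeros(6)
                for j in range(3):
                    for k in range(3):
                        row[idx[j, k]] += P[a, j] * P[b, k]
                row[idx[a, b]] -= 1.0          # (P C P^T - C)_{ab} = 0
                rows.append(row)
    _, _, Vt = np.linalg.svd(np.asarray(rows)) # least-squares null vector
    c = Vt[-1]
    C = np.array([[c[0], c[1], c[2]],
                  [c[1], c[3], c[4]],
                  [c[2], c[4], c[5]]])
    if C[2, 2] < 0:
        C = -C
    # Factor C = K K^T with K upper triangular: flip, Cholesky, flip back.
    J = np.fliplr(np.eye(3))
    L = np.linalg.cholesky(J @ C @ J)
    K = J @ L @ J
    return K / K[2, 2]

# Synthetic check with an assumed K and two rotations about different axes.
K_true = np.array([[1000.0, 0.5, 320.0],
                   [0.0, 1010.0, 240.0],
                   [0.0, 0.0, 1.0]])
a = np.deg2rad(10.0)
Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
Ry = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
Ps = [K_true @ R @ np.linalg.inv(K_true) for R in (Rx, Ry)]
print(np.round(recover_K(Ps), 3))              # should reproduce K_true
```

With homographies from at least two rotations about different axes, the recovered matrix reproduces K up to scale; rotations about a single axis leave the solution underdetermined.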
In (Wester-Ebbinghaus, 1982) the mathematical formulation of the problem is close to the one presented in this paper. The basic idea is to solve the rotations between the images and the camera parameters based on point correspondences. The fundamental difference between the two papers is that in (Wester-Ebbinghaus, 1982) the movement of the projection center is also modelled.
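As a rough illustration of what such a rotation-only formulation looks like, the sketch below (Python with NumPy; an assumption-laden illustration, not the paper's implementation) projects a viewing direction through a rotated camera with camera constant c and principal point (x0, y0). Because all images share one projection center, object points enter only through their directions and no translation term appears. The radial polynomial with coefficients k1 and k2 is an assumed stand-in, since this section does not specify the distortion model.

```python
import numpy as np

def project(direction, R, c, x0, y0, k1=0.0, k2=0.0):
    # Rotate the viewing direction into the camera frame; with a fixed
    # projection center there is no translation, so the model depends
    # only on the rotation R and the interior parameters.
    u = R @ np.asarray(direction, dtype=float)
    x = -c * u[0] / u[2]            # collinearity with camera constant c
    y = -c * u[1] / u[2]
    r2 = x * x + y * y
    # Assumed radial distortion polynomial (k1, k2); the paper's actual
    # distortion model is not given in this section.
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return x0 + s * x, y0 + s * y
```

In a least-squares adjustment of a model of this kind, the unknowns are the per-image rotations plus the shared interior parameters (camera constant, principal point and distortion coefficients).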
Duane Brown, one of the most famous photogrammetrists in the field of camera calibration, has also studied single station camera calibration, and some of his concepts can be found in (Fryer, 1996).
A clear advantage of single station camera calibration is that it can be performed without any known 3-D control points. And because the images are taken from a single point, there are no occlusions or large differences in lighting between the images. This makes it a feasible starting point for automation.
The most difficult aspect of the presented method is keeping the projection center stable during the camera rotation. For this study a special camera platform was constructed (see Figure 1), which allowed rotation both horizontally and vertically. With the help of theodolites and levelling instruments the camera was mounted on the platform so that its projection center was as close as possible to the intersection point of the rotation axes. Of course it is impossible to place the camera exactly at the correct position, and that is why the influence of the non-concentricity needed to be studied. Simulations showed that some care must be taken in rotating the camera, but a small deviation from concentricity does not spoil the results.
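A back-of-the-envelope sketch shows why a small eccentricity is tolerable: displacing the projection center by t shifts the image of a point at distance d by roughly c*t/d (small-angle approximation). The numbers below (camera constant 20 mm, eccentricity 1 mm, targets at 5 m) are assumed for illustration and do not come from the paper.

```python
def eccentricity_shift(c, offset, distance):
    # Small-angle approximation: a projection-center offset perpendicular
    # to the viewing direction shifts the image point by about c * t / d.
    return c * offset / distance

# Assumed example values: c = 20 mm, offset 1 mm, targets at 5 m.
# The shift is about 0.004 mm (4 micrometres) on the image plane, i.e. a
# small fraction of typical measuring accuracy, consistent with the
# finding that small deviations from concentricity do not spoil results.
print(eccentricity_shift(20.0, 1.0, 5000.0))   # -> 0.004 (mm)
```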
There are also several other factors which affect the final calibration results, such as the structure of the captured image set, the distribution of the measured image points, the noise of the image coordinate measurements, the choice of the distortion model, etc. But because these factors are numerous and strongly dependent on each other, it was impossible to study them all in detail. The studied factors were the structure of the captured image set and the noise of the image point measurements.