1.2.1 In-situ calibration: The in-situ methods of calibration
are purported to produce the best camera calibration results.
They are mostly used for calibrating large cameras that cannot
be easily calibrated in laboratories. The cameras are hence
calibrated while they are in operation. In-situ calibration
methods require an area (a calibration range) with a very dense
distribution of highly accurate control points. In addition to
being dense, the control points in the calibration range should be
well distributed both horizontally and vertically. A rigorous
least squares block adjustment based on the collinearity
equations, augmented by equations modelling radial and decentring
distortion (Eq. 5), can generate accurate calibration parameters.
The in-situ method requires aerial imagery over the calibration
range, and the range itself must be carefully maintained over the
years; maintenance may include re-surveying the control points and
verifying that they remain undisturbed. All of these factors can
be expensive and time consuming for camera operators.
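As a rough illustration of this kind of adjustment, the sketch below sets up a single-photo resection (rather than a full multi-image block) that minimises collinearity residuals augmented with a single radial distortion coefficient. All function and parameter names are our own, chosen for illustration; this is not the software actually used for in-situ calibration.

```python
import numpy as np
from scipy.optimize import least_squares

def rotation(omega, phi, kappa):
    """Orientation matrix from omega-phi-kappa angles (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def residuals(p, XYZ, xy):
    """Collinearity residuals with a single radial term (k1)."""
    omega, phi, kappa, Xc, Yc, Zc, f, xp, yp, k1 = p
    u = (XYZ - np.array([Xc, Yc, Zc])) @ rotation(omega, phi, kappa).T
    x = xp - f * u[:, 0] / u[:, 2]          # collinearity equations
    y = yp - f * u[:, 1] / u[:, 2]
    r2 = (x - xp) ** 2 + (y - yp) ** 2
    x += (x - xp) * k1 * r2                 # k1 * r^3 radial correction
    y += (y - yp) * k1 * r2
    return np.concatenate([x - xy[:, 0], y - xy[:, 1]])

# XYZ: n x 3 surveyed control points; xy: n x 2 measured image coordinates;
# p0: approximate starting values. A real in-situ calibration adjusts many
# overlapping photos simultaneously to de-correlate the parameters.
# solution = least_squares(residuals, p0, args=(XYZ, xy))
```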
1.2.2 Precision multi-collimator instruments: The USGS
operates a multi-collimator calibration instrument located at
Reston, Virginia, USA (Light, 1992). The instrument is used to
calibrate film-based cameras; although digital cameras are
increasingly used, a number of photogrammetric companies still
employ film cameras. The aerial camera is placed on top of the
collimator bank, aligned, and focused at infinity. Images are then
taken of the precision targets located in the telescope lenses of
the multi-collimator. The deviation of the measured image (x, y)
coordinates from the known (X, Y) coordinates forms the basis for
solving for the calibration parameters (Eq. 5).
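To make this concrete, the following sketch fits a principal point offset and a focal length to measured collimator target positions by linear least squares; for a distortion-free lens, a target at angle theta from the optical axis images at x = x_p + f tan(theta). The angles and measurements are illustrative values, not USGS data.

```python
import numpy as np

# Hypothetical collimator bank: target angles (degrees) from the optical
# axis along the image x direction, and the measured image x coordinates
# (mm) of those targets (illustrative values only).
angles_deg = np.array([-30.0, -22.5, -15.0, -7.5, 0.0, 7.5, 15.0, 22.5, 30.0])
x_measured = np.array([-88.5, -63.4, -40.9, -20.1, 0.2, 20.4, 41.2, 63.7, 88.8])

# x = x_p + f * tan(theta) is linear in the unknowns x_p and f.
A = np.column_stack([np.ones_like(angles_deg), np.tan(np.radians(angles_deg))])
solution, *_ = np.linalg.lstsq(A, x_measured, rcond=None)
x_p, f = solution

# The residuals of this linear fit are what a full calibration would feed
# into the distortion model (Eq. 5, not reproduced here).
residuals = x_measured - A @ solution
print(f"x_p = {x_p:.3f} mm, f = {f:.2f} mm")
```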
1.2.3 Self calibration: Self calibration uses the information
present in images taken from an un-calibrated camera to
determine its calibration parameters (Fraser, 1997; Fraser, 2001;
Remondino and Fraser, 2006; Sturm, 1998). Methods of self
calibration include solving the Kruppa equations (Faugeras et al.,
1992), enforcing linear constraints on the calibration matrix
(Hartley, 1994), and a method that recovers the absolute quadric,
which encodes the absolute conic on the plane at infinity (Triggs,
1997).
While many techniques have been employed by researchers
(Hartley, 1994; Faugeras et al., 1992), most of them do not solve
for lens distortion or the principal point, as these are not
considered critical in Computer Vision. For photogrammetrists, on
the other hand, these are critical parameters necessary to
produce an accurate product at a reasonable price.
In this study, we will use self calibration techniques to
determine camera calibration parameters. Section 2 provides a
brief theoretical framework for calibration. It goes on to discuss
the design of two methods for self calibration used at the USGS,
and describes the experimental set-up. It introduces an
inexpensive method for calibrating small and medium format
digital cameras with short focal lengths. Section 3 analyses the
results of calibration, and compares the results obtained from
the two methods described in Section 2. Section 4 presents the
conclusions and discusses future work.
2. CALIBRATION METHODOLOGY
2.1 Theoretical basis
The self calibration procedure described in this research is
based on the least squares solution to the photogrammetric
resection problem. The well-known projective collinearity
equations form the basis for the mathematical model.
$$
x - x_p = -f\,\frac{m_{11}(X - X_c) + m_{12}(Y - Y_c) + m_{13}(Z - Z_c)}{m_{31}(X - X_c) + m_{32}(Y - Y_c) + m_{33}(Z - Z_c)} \qquad (1)
$$
$$
y - y_p = -f\,\frac{m_{21}(X - X_c) + m_{22}(Y - Y_c) + m_{23}(Z - Z_c)}{m_{31}(X - X_c) + m_{32}(Y - Y_c) + m_{33}(Z - Z_c)}
$$
In Eq. 1, $(x, y)$ are the measured image coordinates of a feature,
$(x_p, y_p)$ is the location of the principal point of the lens in
the image coordinate system, $f$ is the focal length, and
$$
M = \begin{pmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{pmatrix}
$$
is the camera orientation matrix.
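As a minimal sketch of how Eq. 1 projects an object point into the image (the function name and example values below are hypothetical):

```python
import numpy as np

def project(X, Xc, M, f, xp, yp):
    """Ideal (distortion-free) image coordinates of object point X (Eq. 1).

    X, Xc : 3-vectors, object point and camera perspective centre
    M     : 3x3 camera orientation matrix
    f     : focal length; (xp, yp): principal point
    """
    u = M @ (np.asarray(X, float) - np.asarray(Xc, float))
    return xp - f * u[0] / u[2], yp - f * u[1] / u[2]

# Illustrative call: a nadir-looking camera (M = identity) 1000 m above
# the ground, imaging the point (10, 20, 0) with f = 153 mm.
x, y = project([10.0, 20.0, 0.0], [0.0, 0.0, 1000.0], np.eye(3), 0.153, 0.0, 0.0)
```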
Since the lens in the camera is a complex system consisting of a
series of lenses, the path of light is not always rectilinear. The result is
that a straight line in object space is not imaged as a straight
line. This effect is termed distortion. Primarily, we are
interested in characterizing the radial distortion and the
decentring distortion. Radial distortion displaces image points
along the radial direction from the principal point, and is
symmetric about the principal point (Mugnier et al., 2004). The
distortion is defined by a polynomial (Brown, 1966; Light, 1992):
$$
\delta r = k_1 r^3 + k_2 r^5 + k_3 r^7 + \ldots \qquad (2)
$$
where $r = \sqrt{(x - x_p)^2 + (y - y_p)^2}$ and $k_i,\ i = 1, 2, 3, \ldots$ are the coefficients of the polynomial.
The $(x, y)$ components of the radial distortion are given by:
$$
\delta x = (x - x_p)\,\frac{\delta r}{r}, \qquad \delta y = (y - y_p)\,\frac{\delta r}{r} \qquad (3)
$$
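A small sketch of Eqs. 2 and 3 (function and parameter names are our own, for illustration only):

```python
import numpy as np

def radial_distortion(x, y, xp, yp, k):
    """(dx, dy) radial distortion components of image point (x, y).

    k holds the coefficients k1, k2, k3, ... multiplying r^3, r^5, r^7, ...
    """
    r = np.hypot(x - xp, y - yp)
    if r == 0.0:
        return 0.0, 0.0  # no radial displacement at the principal point
    dr = sum(ki * r ** (2 * i + 3) for i, ki in enumerate(k))  # Eq. 2
    return (x - xp) * dr / r, (y - yp) * dr / r                # Eq. 3

# Example with two illustrative coefficients.
dx, dy = radial_distortion(10.0, 5.0, 0.0, 0.0, k=[1e-8, 1e-13])
```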
The second type of distortion is decentring distortion, which is
due to the displacement of the principal point from the centre of
the lens system. The distortion has both radial and tangential
components, and is asymmetric with respect to the principal point
(Mugnier et al., 2004). The components of decentring distortion in
the x and y directions are given by