orientation and object reconstruction. The object reconstruction is not discussed in this paper, as it is presented in (Hrabacek and Heuvel, 2000). For camera calibration, five images were selected. Figure 1 gives an overview of the images used in the experiments.
1.3. Structure of the paper
The paper is split into two main parts. First the camera calibration
is discussed in section 2, and then the relative orientation of the
four corner images is presented in section 3. These two sections
start with a subsection in which the approach is briefly
explained, followed by the results of the experiments using the
images of the CIPA reference data set. Conclusions are
presented in section 4.
2. CAMERA CALIBRATION
2.1. The method for camera calibration
The method for camera calibration is summarised in the
following three steps (Heuvel, 1999a):
1. Extraction of straight image lines
2. Detection of the object orientation of the image lines
3. Estimation of camera parameters from parallelism and perpendicularity constraints on image lines (sketched below)
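The constraints in step 3 can be made concrete as follows: every straight image line, together with the projection centre, spans an interpretation plane; all lines labelled with the same object direction must have interpretation-plane normals perpendicular to that direction, and the three dominant object directions must be mutually orthogonal. The sketch below only illustrates how such residuals can be formed (the function names are hypothetical, distortion-corrected pixel coordinates are assumed, and numpy is used); the camera parameters follow from minimising these residuals in a least-squares adjustment, which is not shown.

```python
import numpy as np

def interpretation_plane_normal(p1, p2, f, pp):
    """Unit normal of the plane through the projection centre and an image line.

    p1, p2 : endpoints of a (distortion-corrected) image line in pixels
    f      : focal length in pixels
    pp     : principal point (x, y) in pixels
    """
    r1 = np.array([p1[0] - pp[0], p1[1] - pp[1], -f], dtype=float)
    r2 = np.array([p2[0] - pp[0], p2[1] - pp[1], -f], dtype=float)
    n = np.cross(r1, r2)
    return n / np.linalg.norm(n)

def parallelism_residual(n, d):
    # Lines whose object edges share direction d lie in interpretation
    # planes containing d, so their normals must be perpendicular to d.
    return float(np.dot(n, d))

def perpendicularity_residual(d_a, d_b):
    # The three dominant object directions are mutually orthogonal.
    return float(np.dot(d_a, d_b))
```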
The first two steps can be performed manually as well as automatically. In the latter case, a line-growing algorithm is used
for image line extraction (Heuvel, 2001), and a vanishing point
detection method is applied for the detection of the three
dominant object orientations (Heuvel, 1998). The quality of the estimated parameters depends on a correct vanishing-point labelling of the image lines in step 2. With the camera parameters unknown, the automatic detection of the three vanishing points that correspond to edges of the three orthogonal object space orientations is more critical than when using a calibrated camera. Two factors play a role. First,
unknown lens distortion hinders straight line detection and prevents the detected lines from intersecting in a single point in image space. Second, an unknown focal length and principal point make it
impossible to limit the search space after detection of one or two
vanishing points. As a result, each vanishing point is detected independently of previously detected vanishing points. The
procedure below is applied to five images of the CIPA data set.
The lens distortion is determined first, followed by the other
parameters of the camera model, i.e. the focal length and principal
point.
1. Start with vanishing point detection for those images that
contain only one façade of the building. For these images
only two vanishing points are to be detected, one for the
vertical object orientation, and one for the horizontal
object edges.
2. If only images with two (presumably orthogonal) façades
are available, only the vanishing point that corresponds to
the vertical object orientation is detected, and its lines are used
for estimation of the lens distortion. This approach
assumes limited camera tilt and rotation around the optical
axis.
3. Estimate lens distortion using the detected and labelled
lines of at least one, but preferably more images. In the
next steps the estimated lens distortion is eliminated from
the observations.
4. Detect three vanishing points. Three-point
perspective imagery (for example image 10 in Figure 1) is
required for the estimation of the focal length and the
principal point. When the optical axis is nearly horizontal
and thus a one-point or two-point perspective remains (see
the images in the bottom row of Figure 1) the principal
point location in horizontal direction (camera x-axis)
cannot be estimated, or only with very low precision.
5. Estimate the focal length and principal point using the detected and labelled lines of at least one, but preferably more, images (a closed-form illustration follows this list).
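Steps 4 and 5 can be related to a standard closed-form result: in three-point perspective the principal point is the orthocentre of the triangle spanned by the three vanishing points, and the focal length follows from the orthogonality of the corresponding camera-frame directions. The sketch below illustrates this relation (hypothetical function name, numpy assumed, distortion-corrected pixel coordinates); it is not the least-squares estimation from labelled lines used in the paper, and it assumes the three vanishing points have already been located.

```python
import numpy as np

def focal_and_principal_point(v1, v2, v3):
    """Interior orientation from three orthogonal vanishing points (pixels)."""
    v1, v2, v3 = (np.asarray(v, dtype=float) for v in (v1, v2, v3))
    # The principal point p is the orthocentre of the vanishing-point
    # triangle: (p - v1) is perpendicular to (v2 - v3), and (p - v2)
    # is perpendicular to (v3 - v1).
    A = np.array([v2 - v3, v3 - v1])
    b = np.array([np.dot(v2 - v3, v1), np.dot(v3 - v1, v2)])
    p = np.linalg.solve(A, b)
    # Orthogonality of the directions (vi - p, f) gives
    # (v1 - p).(v2 - p) + f^2 = 0, so:
    f_squared = -np.dot(v1 - p, v2 - p)
    if f_squared <= 0:
        raise ValueError("degenerate configuration: no three-point perspective")
    return p, np.sqrt(f_squared)
```

As noted in step 4, the relation degenerates for one- and two-point perspective: when one vanishing point moves towards infinity the system becomes ill-conditioned and the principal point, in particular its horizontal coordinate, can no longer be determined reliably.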
2.2. Camera calibration using the CIPA data set
Image lines are extracted for all the selected images of the CIPA
data set. For the line-growing algorithm used for straight line
extraction, two parameters were set. First, the parameter for the minimum gradient strength was selected such that most
of the characteristic features of the building - especially the
windows - were extracted. The second parameter is the
minimum length in pixels of an extracted image line. This
parameter was fixed to 30 pixels for all the images used for
camera calibration.
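The line-growing algorithm of (Heuvel, 2001) is not reproduced here, but the role of the two parameters can be illustrated with an off-the-shelf extractor. In the sketch below (OpenCV and numpy assumed; the threshold values are hypothetical except for the 30-pixel minimum length), the Canny thresholds take the role of the minimum gradient strength and minLineLength corresponds to the minimum image line length.

```python
import cv2
import numpy as np

def extract_lines(image_path, canny_low=50, canny_high=150, min_length=30):
    """Straight line segments via a probabilistic Hough transform."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, canny_low, canny_high)   # gradient-strength thresholds
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=min_length, maxLineGap=3)
    # Each entry is an endpoint quadruple (x1, y1, x2, y2) in pixels.
    return [] if lines is None else [tuple(int(c) for c in l[0]) for l in lines]
```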
Lens distortion
Detection of two vanishing points was performed on two images
(numbers 8 and 9) that show only one façade of the building. This
is step 1 of the procedure described in the previous section. The
result is shown in Figure 2. In this and following figures the
image lines are colour-coded using the line labelling results of
the vanishing point detection.
Figure 2: colour-coded image lines of the vanishing point detection for images 8 (top) and 9 (bottom).
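The vanishing point detection of (Heuvel, 1998) groups the lines into the dominant object orientations without prior labelling. Once a group of lines carries the same label, however, its vanishing point can be computed in a simple least-squares sense, as the sketch below shows (hypothetical function name, numpy assumed). Working in homogeneous coordinates keeps near-parallel image lines, whose vanishing point lies close to infinity, numerically harmless.

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares vanishing point of line segments sharing one label.

    segments : iterable of ((x1, y1), (x2, y2)) endpoint pairs in pixels
    Returns the vanishing point as a homogeneous 3-vector; a third
    component close to zero indicates a point near infinity.
    """
    rows = []
    for (x1, y1), (x2, y2) in segments:
        # Homogeneous line through the two endpoints (cross product).
        line = np.cross([x1, y1, 1.0], [x2, y2, 1.0])
        rows.append(line / np.linalg.norm(line))
    # v minimises sum_i (l_i . v)^2 subject to |v| = 1: it is the right
    # singular vector belonging to the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1]
```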
For the estimation of the lens distortion (parameter k1) parallelism condition equations for the lines in images 8 and 9 have been used. The estimated value for k1 is -0.570 × 10⁻³. This value is 7.0 times its estimated standard deviation and thus