of local minima and maxima. The better the local minima are distinguished, the higher the accuracy measure. Figure 2(c) shows this accuracy measure, where darker areas indicate
higher location accuracy. The two input images are shown
above.
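The measure itself is defined above; as a purely illustrative sketch (the function name, the 1-D cost profile and the ratio-based score are our assumptions, not the paper's formulation), the distinctness of the best local minimum of a matching-cost profile can be scored against its runner-up as follows:

```python
import numpy as np

def distinctness(costs):
    # Illustrative score in [0, 1] for how well the best local minimum
    # of a 1-D matching-cost profile is separated from the second-best
    # local minimum (lower cost = better match).
    c = np.asarray(costs, dtype=float)
    minima = [i for i in range(1, len(c) - 1)
              if c[i] < c[i - 1] and c[i] < c[i + 1]]
    if len(minima) < 2:
        return 1.0 if minima else 0.0
    vals = sorted(c[i] for i in minima)
    best, second = vals[0], vals[1]
    return 0.0 if second == 0 else 1.0 - best / second
```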
Bucketing over the image is used to obtain a well distributed point set from all corresponding points. To this end, the images are divided into a number of regions, and for each region the point with the highest accuracy measure is retained. These points are displayed in Figure 2(c) as crosses. As the image shows, there are regions in which no correct corresponding points can be found. Thus, only a fraction (about 50%) of all regions is used; these are indicated by larger crosses.
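A minimal sketch of this bucketing step, assuming a regular grid; the grid size and the function and variable names are our own choices, not prescribed by the paper:

```python
import numpy as np

def bucket_points(points, scores, image_shape, grid=(8, 8)):
    # Keep, per grid cell, only the correspondence with the highest
    # accuracy measure. points: (N, 2) array of (x, y) positions,
    # scores: (N,) accuracy measure (higher = better).
    h, w = image_shape[:2]
    rows, cols = grid
    best = {}  # grid cell -> index of the best point seen so far
    for i, (x, y) in enumerate(points):
        cell = (min(int(y * rows / h), rows - 1),
                min(int(x * cols / w), cols - 1))
        if cell not in best or scores[i] > scores[best[cell]]:
            best[cell] = i
    return sorted(best.values())
```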
At this point our corresponding points still contain a few outliers. To remove them we use a robust estimator, the RANSAC (RANdom SAmple Consensus) algorithm of Fischler and Bolles [3]. Afterwards the fundamental matrix is calculated with the accurate Gold Standard algorithm [13]. Using the fundamental matrix, the minimum epipolar distance is calculated for each remaining corresponding point, and points with large distances are marked as outliers.
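A sketch of this two-stage outlier handling with OpenCV; the threshold is an assumed value, and the plain 8-point re-estimation below merely stands in for the Gold Standard refinement of [13]:

```python
import numpy as np
import cv2

def filter_correspondences(pts1, pts2, thresh=1.5):
    # pts1, pts2: (N, 2) float32 arrays of corresponding points.
    # 1) Robust initial estimate (RANSAC of Fischler and Bolles).
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, thresh, 0.99)
    inl = mask.ravel().astype(bool)
    # 2) Re-estimate from the inliers only (stand-in for the Gold
    #    Standard algorithm used in the paper).
    F, _ = cv2.findFundamentalMat(pts1[inl], pts2[inl], cv2.FM_8POINT)
    # 3) Distance of each point in the second image to its epipolar line;
    #    points with large distances are marked as outliers.
    lines = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F)
    lines = lines.reshape(-1, 3)
    d = np.abs(np.sum(lines[:, :2] * pts2, axis=1) + lines[:, 2])
    d /= np.linalg.norm(lines[:, :2], axis=1)
    return F, d < thresh
```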
Since we work with almost planar facades, the observed corresponding points may lie on a plane, which is a critical surface for the estimation of the relative orientation (see Horn [5] for a detailed description of critical surfaces). We therefore distinguish between two configurations for the estimation of the relative orientation. In the first configuration the corresponding points are well distributed in space; the solution for this case is explained in the following section. In the second case all points lie close to a plane; this critical configuration is solved using homographies between the images.
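The criterion used to tell the two configurations apart is not given here; one possible test, shown only as an assumed sketch, is to compare how well a homography and a fundamental matrix explain the correspondences and to treat the scene as near-planar when the homography accounts for (almost) as many inliers:

```python
import cv2

def is_near_planar(pts1, pts2, thresh=1.5, ratio=0.9):
    # Assumed planarity test: a high homography-to-fundamental-matrix
    # inlier ratio suggests the critical (planar) configuration.
    _, mask_h = cv2.findHomography(pts1, pts2, cv2.RANSAC, thresh)
    _, mask_f = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, thresh, 0.99)
    n_h, n_f = int(mask_h.sum()), int(mask_f.sum())
    return n_f == 0 or n_h / n_f >= ratio
```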
Estimating the Relative Orientation using at least 5 Corresponding Points  The inliers (shown as large crosses in Figure 2(c)) are used to determine the relative orientation. We use the algorithm of Horn [5], who proposed quaternions to represent the relative orientation. It is possible to determine the relative orientation directly from the fundamental matrix, but we do not recommend this widely used procedure because of numerical instabilities, for example when the optical axes are almost parallel (the epipole lies at infinity).
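For reference, the relative orientation has five degrees of freedom (three for the rotation and two for the baseline direction, since the scale is free), which is why at least five correspondences are needed. In a rough sketch of Horn's formulation (notation ours, and the weights of his iteration omitted), the rotation is represented by a unit quaternion and the coplanarity condition is minimized over all corresponding ray directions:

```latex
% Coplanarity (triple-product) condition: r'_i and r''_i are
% corresponding ray directions in the two cameras, b the baseline,
% R the rotation represented by a unit quaternion.
\min_{\mathbf{b},\,R}\; \sum_i t_i^{\,2},
\qquad
t_i = \bigl[\mathbf{b},\ \mathbf{r}'_i,\ R\,\mathbf{r}''_i\bigr]
    = \mathbf{b}\cdot\bigl(\mathbf{r}'_i \times R\,\mathbf{r}''_i\bigr)
```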
Estimating the Relative Orientation from a Homography  A homography describes the projective mapping between the two images of a plane in 3D. As shown by Wunderlich [12], and later reformulated by Triggs [11], it is possible to calculate the relative orientation from a homography. We therefore use a RANSAC algorithm to find a subset of corresponding points from which the homography is estimated.
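A sketch of this path with OpenCV: the homography is estimated with RANSAC and then decomposed into candidate rotations, translation directions and plane normals. Note that cv2.decomposeHomographyMat requires the calibration matrix K and follows its own decomposition rather than the Wunderlich/Triggs formulation used in the paper; up to four solutions are returned and have to be disambiguated, e.g. by requiring the points to lie in front of both cameras. Names and thresholds are assumptions.

```python
import cv2

def relative_orientation_from_plane(pts1, pts2, K, thresh=1.5):
    # Robustly estimate the homography induced by the facade plane ...
    H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, thresh)
    # ... and decompose it into up to four (R, t, n) candidates.
    n_solutions, rotations, translations, normals = \
        cv2.decomposeHomographyMat(H, K)
    return list(zip(rotations, translations, normals)), mask
```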
3.2.2 Relative Orientation from Vanishing Points  In our second approach we do not need point-to-point correlation between the image pairs; instead we use vanishing points and line intersections. The known positions of the vanishing points in the image are used to extract all lines pointing towards these vanishing points.
[Figure 3: The color of the detected points of interest indicates their category. (b) Another view of the same building.]
The extraction is based on a sweep line approach. In a preprocessing step sub-pixel edgels are extracted using a low threshold. The number of edgels to be processed is reduced by removing those whose orientation differs too much from the direction towards the vanishing point. The sweep line starts at the vanishing point and sweeps across the image plane. All edgels within a small perpendicular distance of the sweep line are considered inliers. For each densely chained subset of the inliers a line segment is constructed by computing a least squares adjustment over the inlier points. Overlapping parallel segments are grouped after the sweep. In a post-processing step the intersections of line pairs from different vanishing points are computed. These intersections serve as points of interest (POI) for the computation of the relative orientation.
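A condensed sketch of the sweep described above, assuming a finite vanishing point and edgels given as positions with orientations; all thresholds are assumed values, the segment endpoints stand in for the least squares fit, and the grouping of overlapping parallel segments is omitted for brevity:

```python
import numpy as np

def sweep_lines(edgels, orientations, vp, n_sweeps=720, max_perp=1.0,
                max_angle_diff=np.radians(10), max_gap=3.0, min_edgels=10):
    # edgels: (N, 2) sub-pixel positions, orientations: (N,) edgel
    # directions in radians, vp: (2,) finite vanishing point.
    rel = edgels - vp
    dist = np.linalg.norm(rel, axis=1)
    angle = np.arctan2(rel[:, 1], rel[:, 0])
    # Pre-filtering: drop edgels whose orientation deviates too much
    # from the direction towards the vanishing point.
    dev = np.abs(((orientations - angle) + np.pi / 2) % np.pi - np.pi / 2)
    keep = dev < max_angle_diff
    segments = []
    for sweep in np.linspace(0.0, np.pi, n_sweeps, endpoint=False):
        # Inliers: edgels within a small perpendicular distance of the
        # current sweep line through the vanishing point.
        perp = np.abs(dist * np.sin(angle - sweep))
        idx = np.where(keep & (perp < max_perp))[0]
        if len(idx) < min_edgels:
            continue
        idx = idx[np.argsort(dist[idx])]
        # Split into densely chained subsets along the line and build
        # one segment per subset (extreme points instead of the
        # least squares adjustment used in the paper).
        run = [idx[0]]
        for j in idx[1:]:
            if dist[j] - dist[run[-1]] > max_gap:
                if len(run) >= min_edgels:
                    segments.append((edgels[run[0]], edgels[run[-1]]))
                run = []
            run.append(j)
        if len(run) >= min_edgels:
            segments.append((edgels[run[0]], edgels[run[-1]]))
    return segments
```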
A set of POIs that belongs to two vanishing points can be subclassified into 8 categories. We distinguish between 4 line formations that lead to a POI: the upper left, upper right, lower right and lower left corner. In addition we use the gradient information of each line, which indicates which side of the line has the brighter pixels; for horizontal lines, for example, it indicates whether the upper or the lower pixels are brighter. This information can easily be determined for both lines forming a POI by calculating the average direction of all edgels that belong to the line. This gives another 4 reasonable possibilities and therefore 8 categories in total. In Figure 3