of the vanishing points are mutually statistically independent, we have the full statistical information from all relevant line segments.
We now treat the coordinates $(\mathbf{x}_j, \Sigma_{x_j x_j})$ of the vanishing points achieved from the individual detection, boosting and estimation as observations, which need to be corrected. Based on approximate values $\mathbf{x}_j^a$, which in the first iteration are identical to $\mathbf{x}_j$, we again obtain $\widehat{\mathbf{x}}_j = \mathbf{x}_j + \widehat{\mathbf{v}}_j = \mathbf{x}_j^a + \widehat{\Delta\mathbf{x}}_j$, $j = 1, 2, 3$, to fulfill the orthogonality constraint. The model for enforcing the three orthogonality constraints is $\mathbf{g}(\widehat{\mathbf{x}}_j) = \mathbf{0}$ with $g_1 = \widehat{\mathbf{x}}_2^T \widehat{\mathbf{x}}_3$, $g_2 = \widehat{\mathbf{x}}_3^T \widehat{\mathbf{x}}_1$, $g_3 = \widehat{\mathbf{x}}_1^T \widehat{\mathbf{x}}_2$. After reducing the observations $\mathbf{x}_{rj} = \mathbf{J}_r^T(\mathbf{x}_j^a)\,\mathbf{x}_j$, $j = 1, 2, 3$, in order to be able to handle the singularity of the covariance matrices $\Sigma_{x_j x_j}$, we obtain the reduced model $\mathbf{g}(\widehat{\mathbf{x}}_{rj}) = \mathbf{0}$ with $g_1 = \widehat{\mathbf{x}}_{r2}^T \widehat{\mathbf{x}}_{r3} = 0$, $g_2 = \widehat{\mathbf{x}}_{r3}^T \widehat{\mathbf{x}}_{r1} = 0$, $g_3 = \widehat{\mathbf{x}}_{r1}^T \widehat{\mathbf{x}}_{r2} = 0$. The linearized model therefore is $\mathbf{c}_g(\mathbf{x}^a) + \mathbf{B}_r^T \widehat{\Delta\mathbf{x}}_r = \mathbf{0}$, or explicitly
$$
\begin{bmatrix}
\mathbf{x}_2^{aT}\mathbf{x}_3^a \\
\mathbf{x}_3^{aT}\mathbf{x}_1^a \\
\mathbf{x}_1^{aT}\mathbf{x}_2^a
\end{bmatrix}
+
\begin{bmatrix}
\mathbf{0}^T & \mathbf{x}_3^{aT}\mathbf{J}_{r2} & \mathbf{x}_2^{aT}\mathbf{J}_{r3} \\
\mathbf{x}_3^{aT}\mathbf{J}_{r1} & \mathbf{0}^T & \mathbf{x}_1^{aT}\mathbf{J}_{r3} \\
\mathbf{x}_2^{aT}\mathbf{J}_{r1} & \mathbf{x}_1^{aT}\mathbf{J}_{r2} & \mathbf{0}^T
\end{bmatrix}
\begin{bmatrix}
\widehat{\Delta\mathbf{x}}_{r1} \\
\widehat{\Delta\mathbf{x}}_{r2} \\
\widehat{\Delta\mathbf{x}}_{r3}
\end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
$$
with $\mathbf{J}_{rj} = \mathbf{J}_r(\mathbf{x}_j^a)$.
The reduced covariance matrices $\Sigma^a_{x_{rj} x_{rj}}$ of the observations are derived following (6) and (8), by first transforming them to the approximate point and then rotating them to the north (or south) pole, omitting the third, zero component.
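The second of these two steps can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: the function names are ours, and the propagation to the approximate point according to (6) is omitted, so only the rotation of a spherically normalized point and its singular covariance matrix to the north pole and the removal of the zero component are shown.

```python
import numpy as np

def rotation_to_pole(x):
    """Smallest rotation R with R @ x = [0, 0, 1]^T for a unit vector x
    (Rodrigues formula); the south-pole case is handled separately."""
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(x, z)
    c = float(x @ z)
    if np.isclose(c, -1.0):              # x is (close to) the south pole
        return np.diag([1.0, -1.0, -1.0])
    V = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + V + (V @ V) / (1.0 + c)

def reduced_covariance(xa, Sigma_xx):
    """2x2 reduced covariance of a spherically normalized point:
    rotate the (singular) 3x3 covariance so that the approximate point xa
    maps to the north pole, then omit the third, zero component."""
    R = rotation_to_pole(xa / np.linalg.norm(xa))
    Sigma_pole = R @ Sigma_xx @ R.T
    return Sigma_pole[:2, :2]
```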
Minimizing $\Omega = \sum_{j=1}^{3} \widehat{\mathbf{v}}_{rj}^T \Sigma_{x_{rj} x_{rj}}^{-1} \widehat{\mathbf{v}}_{rj}$ under the three constraints yields the classical solution for the update of the fitted observations,
$$
\widehat{\Delta\mathbf{x}}_r = -\Sigma_{x_r x_r} \mathbf{B}_r \left(\mathbf{B}_r^T \Sigma_{x_r x_r} \mathbf{B}_r\right)^{-1} \left(\mathbf{c}_g + \mathbf{B}_r^T \mathbf{x}_r\right) + \mathbf{x}_r\,.
$$
They are used to obtain improved approximate values for the fitted vanishing point coordinates, as in (24). In spite of the low redundancy of $R = 3$ it is useful to determine and report the estimated variance factor $\widehat{\sigma}_0^2 = \Omega/3$.
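To make the complete enforcement step concrete, here is a minimal sketch in Python/NumPy; it illustrates the adjustment described above and is not the authors' Matlab implementation. It assumes the three spherically normalized vanishing points and their 2x2 reduced covariance matrices are given, builds $\mathbf{c}_g$ and $\mathbf{B}_r^T$ as above, applies the update $\widehat{\Delta\mathbf{x}}_r$, and returns the re-normalized directions together with $\Omega/3$. For brevity the reduced covariances are kept fixed over the iterations instead of being re-derived at every new approximate point.

```python
import numpy as np

def reduction_jacobian(x):
    """3x2 matrix J with J^T x = 0 and J^T J = I_2; its columns span the
    tangent plane of the unit sphere at x (the null space of x^T)."""
    _, _, vt = np.linalg.svd(x.reshape(1, 3))
    return vt[1:].T

def enforce_orthogonality(x, Sigma_r, n_iter=3):
    """Sketch of the constrained adjustment described above.
    x       : three spherically normalized vanishing points (unit 3-vectors)
    Sigma_r : three 2x2 reduced covariance matrices
    Returns the fitted, mutually orthogonal directions and Omega / 3."""
    xa = [xj / np.linalg.norm(xj) for xj in x]          # approximate values x_j^a
    Sigma = np.zeros((6, 6))                            # stacked reduced covariance
    for j in range(3):
        Sigma[2*j:2*j+2, 2*j:2*j+2] = Sigma_r[j]

    for _ in range(n_iter):
        J = [reduction_jacobian(xaj) for xaj in xa]
        # reduced observations x_rj = J^T(x_j^a) x_j
        xr = np.concatenate([J[j].T @ x[j] for j in range(3)])
        # constraint values c_g at the approximate point
        cg = np.array([xa[1] @ xa[2], xa[2] @ xa[0], xa[0] @ xa[1]])
        # B_r^T, the Jacobian of the constraints w.r.t. the reduced coordinates
        Bt = np.zeros((3, 6))
        Bt[0, 2:4] = xa[2] @ J[1]; Bt[0, 4:6] = xa[1] @ J[2]   # g1 = x2^T x3
        Bt[1, 0:2] = xa[2] @ J[0]; Bt[1, 4:6] = xa[0] @ J[2]   # g2 = x3^T x1
        Bt[2, 0:2] = xa[1] @ J[0]; Bt[2, 2:4] = xa[0] @ J[1]   # g3 = x1^T x2
        # classical update: Delta x_r = -Sigma B (B^T Sigma B)^{-1}(c_g + B^T x_r) + x_r
        W = np.linalg.inv(Bt @ Sigma @ Bt.T)
        dxr = -Sigma @ Bt.T @ W @ (cg + Bt @ xr) + xr
        vr = dxr - xr                                   # residuals of the reduced observations
        # improved approximate values, re-normalized to the unit sphere
        xa_new = [xa[j] + J[j] @ dxr[2*j:2*j+2] for j in range(3)]
        xa = [xj / np.linalg.norm(xj) for xj in xa_new]

    Omega = sum(vr[2*j:2*j+2] @ np.linalg.inv(Sigma_r[j]) @ vr[2*j:2*j+2]
                for j in range(3))
    return xa, Omega / 3.0
```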
5 EXPERIMENTS
5.1 Used data
We perform two tests, one using uncalibrated images for investigating the reliability of the vanishing point detection and a second using partially calibrated images, where the principal distance is known.
In both cases we automatically derive straight line segments. They are represented by their centroid $\mathbf{x}_0$ [pel], their length $l$ [pel], and their direction $\phi$. This allows specifying the stochastical properties by the standard deviation $\sigma_q$ of the centroid across the line and the standard deviation $\sigma_\phi$ of the direction. They are derived from an ML-estimation using the edge elements and are approximately
$$
\sigma_q = \frac{1}{\sqrt{l}}\,\sigma_e\,, \qquad \sigma_\phi = \sqrt{\frac{12}{l^3}}\,\sigma_e \qquad (25)
$$
where $\sigma_e$ is the standard deviation of an edge element, which depends on the manner of subpixel positioning and always is smaller than the rounding error $1/\sqrt{12}$ [pel].
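As a quick numerical illustration of (25) (the values of $l$ and $\sigma_e$ are assumed for the example, not taken from the experiments): a segment of length $l = 100$ pel with an edge-element accuracy of $\sigma_e = 0.1$ pel has a centroid accuracy of about 0.01 pel and a direction accuracy of about 0.02°.

```python
import math

# Numerical illustration of (25); l and sigma_e are assumed example values.
l, sigma_e = 100.0, 0.1                       # segment length [pel], edge-element accuracy [pel]

sigma_q   = sigma_e / math.sqrt(l)            # centroid accuracy across the line [pel]
sigma_phi = math.sqrt(12.0 / l**3) * sigma_e  # direction accuracy [rad]

print(f"sigma_q   = {sigma_q:.4f} pel")                   # 0.0100 pel
print(f"sigma_phi = {math.degrees(sigma_phi):.4f} deg")   # about 0.02 deg
```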
In our context mainly the angular accuracy is relevant. Using the techniques described by Meidow et al. [2009] the spherically normalized coordinates $\mathbf{l} := \mathbf{l}^s$ of all line segments together with their singular covariance matrices $\Sigma_{ll}$ are determined. For testing we always take a high significance level of $S = 0.9999$. We employ the adaptive determination of the number of trials in the RANSAC procedure as described by Hartley and Zisserman [2000].
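The adaptive trial count follows the standard rule from Hartley and Zisserman: sample until the number of trials $N$ reaches $\log(1-p)/\log(1-w^s)$ for the current inlier ratio $w$, sample size $s$ and confidence $p$. A minimal sketch; the sample size of two line segments per vanishing point hypothesis is an assumption for illustration, and $p$ is set to the significance level used in the text.

```python
import math

def required_trials(w, s=2, p=0.9999):
    """Adaptive number of RANSAC trials so that at least one sample is
    outlier-free with probability p, given inlier ratio w and sample size s."""
    if w <= 0.0:
        return float("inf")
    if w >= 1.0:
        return 1
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - w**s))

# e.g. with 30 % inliers and two-element samples:
print(required_trials(0.3))   # -> 98
```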
The processing time for each image is in the order of a few seconds, including the edge detection program, written in C, and the vanishing point detection, using non-optimized Matlab code.
5.2 Detecting vanishing points
The first group of experiments addresses the quality of the vanishing point detection. We evaluate the detection procedure on three levels of accuracy.
Visual evaluation. First we check the reliability of the vanishing point detection. We downloaded 140 Google images named 'building' or 'bâtiment' (cf. Fig. 3) with a minimum side length of 768 pixels. No interior orientation is known, some images are image sections, some are graphics, and some show significant lens distortion.
Figure 3: 12 of 140 building images, taken from Google ('building', 'bâtiment'). Such images are used for evaluating the vanishing point detection.
For each image we visually identified the number of vanishing points a human could find, and - by inspecting the color coded line segments and the directions to the vanishing points - the number of correctly found vanishing points. The result is shown in Table 1. From the 102 images where 3 or more vanishing points could be detected, in only 7 images the system found only one vanishing point, whereas in 28 images two vanishing points were detected. From the 95 images, mostly with facades, where only two vanishing points could be detected, in 90 % the system could find both vanishing points. In three images no vanishing points could be detected, even by a human. This is coherent with the experiment on the eTRIMS database [Korč and Förstner, 2009], where in all 60 images of facades both vanishing points could be detected.
                              number of vanishing points a human could detect
                                0     1     2     3     4     5     6
  correctly detected     0      3     0     0     0     0     0     0
  by the algorithm       1      0     0     3     4     0     0     0
                         2      0     0    92    25     2     0     1
                         3      0     0     0    65     4     0     1

Table 1: Horizontal: number of vanishing points a human could detect. Vertical: the number of vanishing points correctly detected by the algorithm. 140 Google images and 60 eTRIMS images.
The software gives an internal estimate for the accuracy of the vanishing points. We compared this with the number of line segments supporting a vanishing point. In the Google data set of 140 images, on average 90 lines supported a vanishing point; the mean standard deviation of the direction is approximately 0.3°. On average we obtain an internal estimate for the accuracy of $\sigma_d \approx 2.5^\circ/\sqrt{n}$, which is a lower bound for the real accuracy. Of course, no check on the orthogonality of the vanishing points could be performed, as the intrinsic parameters are not available for these images.
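As a back-of-the-envelope check of these numbers (our reading, not from the paper: taking $2.5^\circ$ as the per-line contribution), $n \approx 90$ supporting lines give an expected directional accuracy of about $0.26^\circ$ per vanishing point, which is consistent with the reported mean standard deviation of roughly $0.3^\circ$.

```python
import math

# Consistency check of the reported accuracies (2.5 deg and n = 90 are the
# values quoted in the text; reading 2.5 deg as a per-line contribution is ours).
sigma_single = 2.5            # internal accuracy estimate per supporting line [deg]
n_lines = 90                  # average number of lines supporting a vanishing point

sigma_vp = sigma_single / math.sqrt(n_lines)
print(f"expected accuracy per vanishing point: {sigma_vp:.2f} deg")   # -> 0.26 deg
```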