KNOWLEDGE-BASED AUTOMATIC 3D LINE EXTRACTION FROM CLOSE RANGE
IMAGES
S. Zlatanova and F. A. van den Heuvel
Delft University of Technology, Department of Geodesy
Thijsseweg 11, 2629JA Delft, The Netherlands
Email: {S.Zlatanova, F.A.vandenHeuvel}@geo.tudelft.nl
Commission V, WG V/3
KEY WORDS: Object reconstruction, Edge detection, Feature-based matching, Topology, 3D Databases, Augmented reality
ABSTRACT:
The research on 3D data collection concentrates on automatic and semi-automatic methods for 3D reconstruction of man-made
objects. Due to the complexity of the problem, details such as windows, doors, and ornaments on the facades are often excluded from the
reconstruction procedure. However, some applications (e.g. augmented reality) require the acquisition and maintenance of rather
detailed 3D models.
In this paper, we present an automatic method for extracting details of facades in terms of 3D line features from close range imagery.
The procedure for 3D line extraction consists of four basic steps, namely edge detection, edge projection onto one or more sequential
images, edge matching between the projected and detected edges, and computation of the 3D co-ordinates of the best-matched candidates.
To reduce the number of candidates for matching, we use the rough representation of facades (i.e. simple rectangles) obtained from
3D reconstruction procedures completed prior to the 3D line extraction. The paper presents the method, discusses achieved results
and proposes solutions to some of the problematic cases.
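As a concrete illustration of the final step (computing the 3D co-ordinates of a matched line), the following sketch intersects the planes obtained by back-projecting a matched image line from two views. It assumes calibrated cameras with known 3x4 projection matrices; the function names and the plane-intersection formulation are our illustrative assumptions and not necessarily the computation used in the paper.

    import numpy as np

    def backproject_line(P, line_2d):
        # A 2D image line l (homogeneous 3-vector) back-projects through
        # the 3x4 projection matrix P to the 3D plane P^T l.
        return P.T @ line_2d

    def intersect_planes(plane_a, plane_b):
        # Intersect two planes (n, d) with n.X + d = 0; returns the 3D line
        # as (point_on_line, unit_direction). Assumes the planes are not parallel.
        n_a, d_a = plane_a[:3], plane_a[3]
        n_b, d_b = plane_b[:3], plane_b[3]
        direction = np.cross(n_a, n_b)
        # One point satisfying both plane equations (minimum-norm solution).
        A = np.vstack([n_a, n_b])
        b = -np.array([d_a, d_b])
        point, *_ = np.linalg.lstsq(A, b, rcond=None)
        return point, direction / np.linalg.norm(direction)

    def line_3d_from_match(P1, line_in_image_1, P2, line_in_image_2):
        # 3D line supporting an edge matched between two images.
        plane_1 = backproject_line(P1, line_in_image_1)
        plane_2 = backproject_line(P2, line_in_image_2)
        return intersect_planes(plane_1, plane_2)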
1. INTRODUCTION
3D data has become critically important for many
applications over the last several years. Urban planning,
telecommunication, utility management, tourism, and vehicle
navigation are among the most prominent. The huge
amount of data to be processed, significant human efforts and
the high cost of 3D data production demand automatic and
semi-automatic approaches for reconstruction. The research on
3D reconstruction focuses mainly on man-made objects, and
more particularly on buildings. The efforts are directed towards fully
automatic procedures utilising aerial or close range imagery. A
lot of work has already been completed on this subject and the
progress is apparent. However, the efforts of most of the
researchers are concentrated on reconstructing the rough shape
of the buildings, neglecting details on the facades such as
windows, doors, ornaments, etc. Depending on the application,
such details may play a critical role. A typical example is an
augmented reality application utilising a vision system for
orientation and positioning, which requires both accurate building
outlines and many clearly visible elements on the facades. Here,
we present our approach for collecting 3D details on facades.
The research is a part of the interdisciplinary project UbiCom
carried out at the Delft University of Technology, The
Netherlands (UbiCom project, 2002).
Within this project, an augmented reality system is to be developed
that relies on a vision system for positioning the mobile user
with centimetre accuracy and a latency of 2 ms (Pasman &
Jansen, 2001). The initial idea, i.e. utilising only an inertial
tracker, failed due to the rather large drift observed during the
experiments. The current equipment (assembled within the
project) is capable of positioning the user in the real world with
an accuracy of 5 m (Persa & Jonker, 2001). This accuracy,
however, does not satisfy the requirements of the application
and is therefore used only to obtain a rough location. The
accurate positioning is to be performed by the vision
system, i.e. by tracking features. Among the variety of tracking
approaches reported in the literature, we have concentrated on
tracking line features (Pasman et al., 2001). This is to say, the
accurate positioning is to be achieved by a line matching
algorithm between line features extracted in real time from a
video camera (mounted on the mobile unit), and lines available
in an a priori reconstructed 3D model (rough and detailed). The
approximate positioning (obtained by the inertial tracker and
GPS) provides input to the DBMS search engine
in order to obtain the 3D line features in the current field of
view. Figure 1 shows an example of such a vision system.
[Figure 1: Typical setup of camera tracking system. Block diagram: the camera feeds feature tracking; a Kalman filter rejects bad features, improves feature positions, and estimates the camera and feature positions, yielding the accurate position; the approximate position (inertial tracker, GPS) serves as input.]
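To illustrate the selection of 3D line features for the current field of view, the following minimal sketch filters an in-memory list of model lines against an approximate camera pose. It only mimics the DBMS search engine mentioned above; the pose representation, the viewing-cone test, and the threshold values are illustrative assumptions.

    import numpy as np

    def lines_in_field_of_view(lines_3d, camera_position, view_direction,
                               half_fov_deg=30.0, max_range=50.0):
        # Return the 3D lines (pairs of endpoints) that are roughly visible
        # from the approximate camera pose delivered by the inertial tracker
        # and GPS. Visibility is approximated by a viewing cone around the
        # view direction and a maximum range.
        view_dir = view_direction / np.linalg.norm(view_direction)
        cos_half_fov = np.cos(np.radians(half_fov_deg))
        visible = []
        for p0, p1 in lines_3d:
            midpoint = (np.asarray(p0, float) + np.asarray(p1, float)) / 2.0
            to_line = midpoint - camera_position
            dist = np.linalg.norm(to_line)
            if dist == 0.0 or dist > max_range:
                continue
            # Keep the line if its midpoint lies inside the viewing cone.
            if np.dot(to_line / dist, view_dir) >= cos_half_fov:
                visible.append((p0, p1))
        return visible

In the actual system such a query would run against the 3D database rather than an in-memory list, but the geometric test is the same in spirit.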
The accuracy of the 3D model (rough and detailed) is the most
critical requirement. The extracted 3D line features need to
ensure decimetre accuracy in order to satisfy the rendering
requirements. Furthermore, the tracking system has to be able to
work at different times of the day and under different weather
conditions. Therefore only clearly visible elements have to be
available in the 3D model. This is to say that the influence of