mind that optimal stationing of the sensor is not always
possible, especially in inner city areas, where traffic, property
boundaries and other circumstances limit the choice for sensor
stationing.
In colour texture acquisition it has been shown that automatic
fusion of images acquired at multiple stations can minimize
occlusions (Böhm, 2004). However, the key to this approach, i.e.
redundant data acquisition, is only feasible since image
acquisition with close-range cameras is fairly inexpensive. This
approach is not recommended in terrestrial laser scanning, since
each station is associated with considerable costs. The same is
true for vehicle-based scanning systems, as either the number of
scanners mounted on the vehicle or the number of drive-bys
needs to be increased.
We therefore investigate the exploitation of self-redundancy for
modelling facades from incomplete range data. This approach is
based on the observation that façades of buildings are typically
composed of repetitive patterns. This repetition of patterns can
be used to substitute incomplete areas of the range data. One
key question is how the structure of these repetitions can be
detected and how it can be efficiently encoded. In this paper we
propose a graph-based approach for the encoding of repetitions,
which is efficiently derived from key point matching. Before we
detail this procedure in section 5, we first give an overview of
related work in section 2. In section 3 we give the fundamentals
of our approach to façade modelling by LASERMAPs and in
section 4 we show how defective areas can be substituted for.
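As an illustration of this graph-based encoding, the following minimal sketch (an assumed representation for illustration, not the procedure detailed in section 5) groups key points that were found mutually similar into repetition groups via connected components of the similarity graph:

from collections import defaultdict

def build_similarity_graph(matches):
    """matches: iterable of (i, j) index pairs of mutually similar key points."""
    graph = defaultdict(set)
    for i, j in matches:
        graph[i].add(j)
        graph[j].add(i)
    return graph

def repetition_groups(graph):
    """Return connected components, i.e. groups of repeated facade elements."""
    seen, groups = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(graph[node] - component)
        seen |= component
        groups.append(sorted(component))
    return groups

# Example: key points 0, 3 and 7 were found mutually similar -> one group.
print(repetition_groups(build_similarity_graph([(0, 3), (3, 7), (5, 9)])))
# [[0, 3, 7], [5, 9]]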
2. RELATED WORK
In the computer vision literature the problem of replacing
defective or occluded image parts has been dealt with extensively.
Prominent solutions include inpainting, a technique which is
used to fill small gaps, typically by propagating linear structures
from the border of the defective area into the area. A second
approach is texture synthesis, which tries to copy repetitive
texture to fill an occluded image area. There are many
variations and also combinations of these methods, see for
example (Criminisi, 2004). Our approach is most similar to
exemplar-based inpainting methods. We are not aware of
approaches specific to depth maps or range images.
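To make the exemplar-based idea concrete, the following sketch (a strong simplification for illustration, not Criminisi's priority-driven algorithm) fills a rectangular hole with the fully valid patch elsewhere in the same image that best matches the hole's surroundings:

import numpy as np

def fill_with_best_exemplar(img, mask):
    """img: 2D float array; mask: bool array, True where data is missing."""
    ys, xs = np.where(mask)
    top, left = ys.min(), xs.min()
    h = ys.max() - top + 1
    w = xs.max() - left + 1
    target = img[top:top + h, left:left + w]
    known = ~mask[top:top + h, left:left + w]   # valid pixels around the hole

    best_err, best_patch = np.inf, None
    for y in range(img.shape[0] - h + 1):
        for x in range(img.shape[1] - w + 1):
            if mask[y:y + h, x:x + w].any():    # exemplar must be fully valid
                continue
            cand = img[y:y + h, x:x + w]
            err = np.sum((cand[known] - target[known]) ** 2)
            if err < best_err:
                best_err, best_patch = err, cand

    filled = img.copy()
    filled[top:top + h, left:left + w][~known] = best_patch[~known]
    return filled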
As mentioned earlier we have dealt with the case of occluded
colour texture images (Böhm, 2004). As the acquisition of
LiDAR data is inherently more time-consuming than image
acquisition, it is unrealistic to assume a highly redundant
multi-station configuration to overcome occlusions, as is possible in
colour texture image synthesis. However, the idea of having a
redundant description of one and the same area of an image
still holds, and thus the proposed techniques for robust fusion
can be transferred to LASERMAPs.
Self-similarity, repetitive structures and symmetry of buildings
and facades in particular have recently attracted great attention
in the research field (Müller, Wonka et al. 2006; Ripperda and
Brenner 2006). A number of successful applications were
developed which exploit these properties (Müller, Zeng et al.
2007). The aforementioned approaches use grammars to store
the repetitive pattern of elements. This approach has been shown to
be successful for synthesizing complete façades and for creating
variations of facades. In our work we chose a different
representation scheme which is based on graphs. Our idea of
using self-similarity for substituting incomplete data is also
motivated by the work of (Pauly, Mitra et al. 2005). It differs in
that we select the examples used to fill the gaps from the dataset
itself rather than using a database of models.

Figure 2. A prismatic building model with detailed roof structure
and a registered point cloud acquired by ground-based LIDAR.
The simple representation in the form of a LASERMAP enables
us to use equally simple image processing operators to extract
structures. In order to detect repetitive patterns we use a
procedure similar to that proposed in (Wenzel, Drauschke et al.
2007). It is based on feature point extractors typically used for
registration of separate data sets. Matching these feature points
within the same dataset detects self-similarity.
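The following sketch illustrates this self-matching step, assuming the LASERMAP is available as an 8-bit image; OpenCV's ORB detector, the thresholds and the file name are stand-ins chosen for illustration, not the exact extractor of (Wenzel, Drauschke et al. 2007):

import cv2
import numpy as np

lasermap = cv2.imread("lasermap.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

orb = cv2.ORB_create(nfeatures=2000)
keypoints, descriptors = orb.detectAndCompute(lasermap, None)

# Match the feature set against itself; the nearest neighbour of every point
# is the point itself, so the second-nearest neighbour is kept instead.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = matcher.knnMatch(descriptors, descriptors, k=2)

self_matches = []
for first, second in pairs:
    m = second if first.queryIdx == first.trainIdx else first
    if m.distance < 40:                           # similarity threshold (assumption)
        p = keypoints[m.queryIdx].pt
        q = keypoints[m.trainIdx].pt
        if np.hypot(p[0] - q[0], p[1] - q[1]) > 20:   # suppress near-duplicates
            self_matches.append((m.queryIdx, m.trainIdx))

# self_matches now lists key point pairs lying on repeated facade elements,
# e.g. corresponding corners of identical windows.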
3. FACADE MODELLING USING LASERMAPS
In order to be able to combine ground-based laser data with
pre-existing building models, the data has to be registered. There
are many possibilities to compute the registration, ranging from
direct georeferencing, to manual and automatic alignment. Our
approach to registration and georeferencing of terrestrial laser
data and virtual city models is given in full detail in
(Schuhmacher and Boehm, 2005).
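Whichever variant is chosen, the result is a rigid-body transform that maps scanner coordinates into the coordinate frame of the building model. A minimal sketch of applying such a transform, with placeholder rotation and translation values:

import numpy as np

def register_points(points, R, t):
    """points: (N, 3) array in scanner coordinates -> (N, 3) in model frame."""
    return points @ R.T + t

# Placeholder values; in practice R and t come from georeferencing or an
# automatic alignment procedure.
R = np.eye(3)                       # estimated rotation (here: identity)
t = np.array([100.0, 200.0, 5.0])   # estimated translation
scan = np.random.rand(1000, 3)      # stand-in for a terrestrial scan
scan_in_model_frame = register_points(scan, R, t)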
For the rest of this paper we assume that the registration has
been computed and the range data is given with respect to the
same coordinate frame as the building model. This initial
situation of a point cloud registered to a building model is
shown in Figure 1. The dataset depicts the president’s office at
the Universität Stuttgart. The point cloud was acquired with a
terrestrial laser scanner, a Leica HDS 3000, from more than 15
stations. The data covers the facades of the building with a
point spacing of better than 20 mm. The large number of stations
was necessary to minimize shadowing caused by occluding objects. By
removing selected stations, we can now control the
completeness or incompleteness of the data.
Our method for modelling facades is motivated by concepts for
modelling developed in computer graphics. In computer graphics
the duality of coarse over-all geometry and fine detail has long
been noted. The separation of the two is a fundamental
modeling principle. Starting with the observations of Blinn
(1978) that the effect of fine surface details on the perceived
intensity is “primarily due to their effect on the direction of the
surface normal ... rather than their effect on the position of the
surface”, modeling concepts were developed, which keep fine
surface detail separate as a perturbation of the normal direction
or a displacement to the underlying coarser geometry.
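In this spirit a LASERMAP can be read as a displacement map over the coarse facade geometry. The following sketch rasterizes the perpendicular offsets of registered points onto a regular grid on the facade plane; the plane parameterization, grid resolution and function name are illustrative assumptions rather than the exact generation procedure:

import numpy as np

def displacement_map(points, origin, u, v, normal, cell=0.02, width=20.0, height=10.0):
    """Rasterize perpendicular offsets of facade points onto the coarse plane.

    points: (N, 3) registered facade points
    origin: a point on the coarse facade plane
    u, v:   orthonormal in-plane axes; normal: unit plane normal
    cell:   grid spacing in metres; width/height: facade extent in metres
    """
    rel = points - origin
    cols = np.floor(rel @ u / cell).astype(int)
    rows = np.floor(rel @ v / cell).astype(int)
    disp = rel @ normal                        # signed distance from the plane

    n_rows, n_cols = int(height / cell), int(width / cell)
    grid = np.full((n_rows, n_cols), np.nan)   # NaN marks cells without data
    inside = (rows >= 0) & (rows < n_rows) & (cols >= 0) & (cols < n_cols)
    # Simplification: if several points fall into one cell, the last one wins.
    grid[rows[inside], cols[inside]] = disp[inside]
    return grid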