ISPRS Commission III, Vol. 34, Part 3A "Photogrammetric Computer Vision", Graz, 2002
Figure 1: Sequence of the hierarchic robust interpolation.
a) Creation of a data pyramid; small points: original data, thick points: data pyramid (lowest point in a regular 5 m interval).
b) DTM generation at the coarse level by robust interpolation; the remaining point on the house is eliminated with an asymmetric and shifted weight function. The surface in the first and last iteration is shown.
c) Coarse DTM with a tolerance band; all original points within the tolerance band are accepted.
d) DTM generation at the fine level by robust interpolation using an asymmetric and shifted weight function. Again, the first and last iterations are shown.
The hierarchic robust interpolation proceeds as follows:
1. Create the data pyramids with the lower resolutions.
2. Apply robust interpolation to generate a DTM, starting at
the coarsest level.
3. Compare the DTM to the data of the higher resolution and
accept points within a certain tolerance band.
Steps 2 and 3 are repeated for each finer level of detail. The
sequence of the hierarchic robust interpolation on a synthetic
laser scanner profile in a built-up area is presented in fig. 1.
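The coarse-to-fine loop above can be sketched in a few lines. This is a minimal illustration only: the pyramid step keeps the lowest point per cell (as in fig. 1a), while a simple polynomial fit stands in for the robust linear prediction with an asymmetric, shifted weight function; all function names and the synthetic profile are assumptions, not the SCOP++ implementation.

```python
import numpy as np

def thin_to_grid(points, cell):
    """Build one pyramid level: keep the lowest point per cell interval."""
    buckets = {}
    for x, z in points:
        key = int(x // cell)
        if key not in buckets or z < buckets[key][1]:
            buckets[key] = (x, z)
    return sorted(buckets.values())

def fit_surface(points):
    """Stand-in for robust linear prediction: a low-order polynomial fit
    through the (already thinned) pyramid level."""
    xs = np.array([p[0] for p in points])
    zs = np.array([p[1] for p in points])
    return np.poly1d(np.polyfit(xs, zs, 2))

def hierarchic_filter(points, cells=(5.0, 1.0), tol=0.5):
    """Steps 1-3: build a pyramid level, derive a DTM, then accept only
    the points inside a tolerance band around that DTM; repeat on the
    accepted points at the next finer level."""
    accepted = points
    for cell in cells:
        level = thin_to_grid(accepted, cell)   # step 1: data pyramid
        dtm = fit_surface(level)               # step 2: DTM at this level
        accepted = [(x, z) for x, z in accepted
                    if abs(z - dtm(x)) <= tol]  # step 3: tolerance band
    return accepted

# synthetic profile: flat terrain at z = 0 with three off-terrain
# points on a 'house' around x = 5
profile = [(i * 0.5, 0.0) for i in range(40)]
profile += [(4.5, 3.0), (5.0, 3.2), (5.5, 3.1)]
terrain = hierarchic_filter(profile)
```

Because the pyramid keeps the lowest point per cell, the house points never enter the coarse surface, and the tolerance band then rejects them from the original data.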
Further details about hierarchic robust interpolation, its
implementation in the software package SCOP++ and the
results for an OEEPE dataset can be found in (Pfeifer et al.,
2001).
4 EXAMPLES
In the following, the results of these algorithms applied to
different datasets are presented. As will be seen, the procedure
is adapted to the characteristics of each dataset.
4.1 Robust Interpolation of Tacheometric and Photogrammetric Data
The Municipality of Vienna ordered a test project for the
determination of a DTM from available digital map data. In the
streets the data was captured with total stations; photogrammetric
measurements were used for eaves, park areas and a
few other features. The initial classification of terrain points was
performed by point attributes stored in a database. The resulting
dataset with all classified terrain points was very
inhomogeneous, because there were many points
along the streets and only a few points in backyards and park
regions. Therefore we decided to densify the data to get a
complete DTM over the whole test area. The densification was
performed by exporting a regular 5 m raster after a triangulation
of the data. The DTM was computed by linear prediction
considering different point class accuracies. In this way we were
able to assign the originally measured points a higher accuracy
than the densified data (a small measurement variance of
25 cm² vs. 1 m² for the densification points).
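The effect of these different point-class accuracies can be illustrated with a small weighted fit. This is a hedged sketch with assumed noise values and names, not the actual linear prediction used in SCOP++: in linear prediction the class variances enter the covariance matrix, whereas here a weighted polynomial fit stands in for that mechanism.

```python
import numpy as np

# Assumed illustration: two point classes with the variances quoted in
# the text (25 cm^2 for measured points, 1 m^2 for densification points).
rng = np.random.default_rng(42)

sigma_meas = 0.05  # variance 25 cm^2 -> sigma = 5 cm
sigma_dens = 1.0   # variance 1 m^2  -> sigma = 1 m

# synthetic gently sloping terrain profile, slope 0.1
x_meas = np.linspace(0.0, 10.0, 30)
z_meas = 0.1 * x_meas + rng.normal(0.0, sigma_meas, 30)
x_dens = np.linspace(0.0, 10.0, 30)
z_dens = 0.1 * x_dens + rng.normal(0.0, sigma_dens, 30)

x = np.concatenate([x_meas, x_dens])
z = np.concatenate([z_meas, z_dens])
# np.polyfit expects weights w = 1/sigma for Gaussian uncertainties,
# so the measured points get 20 times the weight of the densified ones
w = np.concatenate([np.full(30, 1 / sigma_meas),
                    np.full(30, 1 / sigma_dens)])

slope, intercept = np.polyfit(x, z, 1, w=w)
```

With this weighting the solution is dominated by the originally measured points, which is exactly the intent of assigning the densification points a larger variance.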
Figure 2: Perspective view of the DTM (5 m grid width) from the
selected terrain points (from the database) after densification.
This leads to a DTM which is mainly influenced by the
measured data; the interpolated grid is only used to get a
complete surface model. A perspective view of a part of this
model can be seen in fig. 2. The terrain in this area is rather flat,
but a few spots indicate data errors caused by false point
attributes. A closer look at the data, in combination with a geo-
referenced vector map of the building blocks, showed that there
were some misclassified points on the houses due to a false
attribute, and there were also a few points with obviously false
z-values. Therefore it was necessary to eliminate these gross
errors.