laser scanning these errors can be split into "real" measurement errors (e.g. so-called long ranges caused by multi-path effects) and errors caused by reflecting surfaces above the terrain (e.g. on vegetation or houses). A method for the automated elimination, or at least reduction, of gross errors is therefore necessary.
Systematic errors: In the case of systematic errors we have to distinguish between gross systematic errors, which have characteristics similar to those of gross errors, and small systematic errors (e.g., in the case of LS, too short ranges caused by reflection in low vegetation or even in grass rather than on the soil). The influence of these small errors is small in magnitude, and it is difficult to eliminate them without any further information. Gross systematic errors have the property that they appear with the same magnitude at many points. One example of such an error is a shift in the orbit of a LS platform.
3. ALGORITHMS
In the following, algorithms considering the three types of measurement errors (random, gross and systematic) are presented. If possible, systematic errors in the data should be avoided in the measurement process or corrected with suitable models before the surface model generation. However, as can be seen in the examples section, we are able to eliminate gross systematic errors (sec. 4.4) if enough error-free data is given. Small systematic errors cannot be excluded from the DTM derivation.
Our method for the interpolation of randomly distributed point data - linear prediction - has quite a long history, but still plays a central role. The technique of robust interpolation, developed for the generation of a DTM from airborne laser scanner data in wooded areas, and its extension in a hierarchical set-up have withstood many tests for the DTM generation from laser scanner data. In the following, these algorithms are summarized and their treatment of errors is presented. The formulas can be found in the appendix.
3.1 Interpolation
For the interpolation of the DTM we use linear prediction, which is very similar to kriging (Kraus, 1998). This approach considers the terrain height as a stochastic process. Depending on the data, its covariance function (corresponding to the variogram of kriging) is determined automatically. This function describes the covariance of measured point heights depending on the horizontal Euclidean point distance. In the algorithm used (Gaussian covariance) it decreases monotonically with increasing distance. The interpolation is applied patchwise to the data, which results in an adaptive (i.e. patchwise) setting of the covariance function.
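For illustration, such a Gaussian covariance function can be written as C(d) = C(0) · exp(−(d/c)²), where C(0) is the covariance at point distance zero, d the horizontal point distance, and c a range parameter; this parameterization is only meant as a sketch, the function actually used is given in the appendix.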
The variance of the measured heights (covariance at point distance zero) contains the variance of the terrain heights plus the variance of the measurement errors. Subtracting the variance of the measurement errors, which is known a priori from the measurement process, yields the variance of the terrain. Details on the
computation of the covariance and the linear prediction (also
known as surface summation with Gaussian basis functions) can
be found in (Kraus, 2000, sec. H.3). The covariance functions
are centred on each data point and factors for these functions
are determined for each point in a linear system of equations.
The sum of the (vertically) scaled functions is the interpolated
surface. With the variance of the measurement errors the
smoothness of the resulting surface can be influenced.
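The following sketch illustrates this surface summation with a simple linear prediction in the sense described above. The function and parameter names (gaussian_cov, c0, c_range, sigma2), the reduction of the heights by their mean as a trend, and the global (rather than patchwise) treatment of the data are assumptions made for this illustration only.

    import numpy as np

    def gaussian_cov(d, c0, c_range):
        # Gaussian covariance model: covariance as a function of horizontal distance d
        return c0 * np.exp(-(d / c_range) ** 2)

    def linear_prediction(xy, z, xy_new, c0, c_range, sigma2):
        # xy: (n, 2) data point coordinates, z: (n,) measured heights,
        # xy_new: (m, 2) positions to interpolate, sigma2: measurement error variance
        z0 = z.mean()                                  # simple trend reduction
        dz = z - z0
        # covariance matrix of the data points, measurement noise on the diagonal
        d_data = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
        C = gaussian_cov(d_data, c0, c_range) + sigma2 * np.eye(len(z))
        # factors of the covariance (basis) functions centred on each data point
        factors = np.linalg.solve(C, dz)
        # sum of the vertically scaled basis functions at the new positions
        d_new = np.linalg.norm(xy_new[:, None, :] - xy[None, :, :], axis=2)
        return z0 + gaussian_cov(d_new, c0, c_range) @ factors

A larger sigma2 damps the factors and thus yields a smoother surface, in line with the remark above.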
3.2 Robust Interpolation
This method was originally developed for the generation of a
DTM from laser scanner data in wooded areas. For this purpose
a solution was found, which integrates the elimination of gross
errors and the interpolation of the terrain in one process. The
aim of this algorithm is to compute an individual weight for
each irregularly distributed point in such a way that the
modelled surface represents the terrain.
It consists of the following steps:
1. Interpolation of the surface model considering individual
weights for each point (at the beginning all points are
equally weighted).
2. Calculate the filter values* (oriented distance from the surface to the measured point) for each point.
3. Compute a new weight for each point according to its filter
value.
The steps are repeated until a stable situation is reached (all gross errors are eliminated) or a maximum number of iterations is reached. The results of this process are a surface model and a classification of the points into terrain and off-terrain points.
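A minimal sketch of this iteration is given below; predict stands for the linear prediction of sec. 3.1 with an individual measurement variance per point derived from the weights, and weight_function for the weight model of step 3 (both names are placeholders for this illustration).

    import numpy as np

    def robust_interpolation(xy, z, predict, weight_function, max_iter=10, eps=0.01):
        weights = np.ones(len(z))                  # step 1: all points equally weighted
        surface_z = predict(xy, z, weights)        # surface heights at the data points
        for _ in range(max_iter):
            filter_values = z - surface_z          # step 2: oriented distance surface -> point
            new_weights = weight_function(filter_values)   # step 3: reweight each point
            if np.max(np.abs(new_weights - weights)) < eps:
                break                              # stable situation reached
            weights = new_weights
            surface_z = predict(xy, z, weights)    # step 1 again with the new weights
        return surface_z, weights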
The two most important entities of this algorithm are the functional model (step 1) and the weight model (step 3). For the functional model, linear prediction (sec. 3.1) considering an individual weight (i.e. an individual variance for the measurement error) for each point is used. The elimination of the gross errors is controlled by a weight function (fig. 3, 6 and 12). The parameter of this function is the filter value and its "return value" is a (unitless) weight. The weight function is a bell curve (similar to the one used for robust error detection in bundle block adjustment) controlled by the half width value (h) and its tangent slope (s) at the half weight. Additionally, it can be used in an asymmetric and shifted way (fig. 6) in order to allow an adaptation to the distribution of the errors with respect to the "true" surface. The asymmetry means that the left and right branches are independent, and the shift means that the weight function is not centred at the zero point. With the help of two tolerance values (t− and t+, fig. 3) points with a certain distance to the computed surface can be excluded from the DTM determination process. In general it can be said that the algorithm relies on a "good" mixture of points with and without gross errors in order to iteratively eliminate the off-terrain points. Finally, the classification into accepted and rejected points is performed by tolerance values (thresholds for the filter value). A detailed description of this method can be found in (Kraus and Pfeifer, 1998).
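The following sketch shows one possible parameterization of such an asymmetric, shiftable bell curve; the concrete formula, default parameters and tolerance values are illustrative assumptions and not the ones used in (Kraus and Pfeifer, 1998).

    import numpy as np

    def bell_weight(filter_values, h_neg=0.5, s_neg=4.0, h_pos=0.2, s_pos=4.0,
                    shift=0.0, t_low=-2.0, t_high=0.5):
        # weight 1 near the (shifted) origin, 0.5 at the half width h, falling off
        # with a steepness controlled by s; negative and positive branches are
        # independent (asymmetry); points outside [t_low, t_high] get weight 0
        fv = np.atleast_1d(np.asarray(filter_values, dtype=float))
        f = fv - shift
        w = np.where(f < 0,
                     1.0 / (1.0 + (np.abs(f) / h_neg) ** s_neg),
                     1.0 / (1.0 + (np.abs(f) / h_pos) ** s_pos))
        w[(fv < t_low) | (fv > t_high)] = 0.0
        return w

For filtering in wooded areas, the positive branch would typically be chosen narrower than the negative one, since the off-terrain points caused by vegetation lie above the terrain.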
3.3 Hierarchic Robust Interpolation
As mentioned before, the robust interpolation relies on a "good mixture" of points with and without gross errors. Therefore, this algorithm is not able to eliminate gross errors which occur clustered in large areas. To cope with this problem we use the robust interpolation in a hierarchic set-up, which is similar to the use of image pyramids in image processing. With the help of the data pyramids the input data is provided in a form that allows all gross errors to be eliminated with this coarse-to-fine approach. The coarse level surfaces obtained from the coarse level point sets are used for densification (i.e. adding finer level point data). The resulting fine level DTM consists of all measured terrain points.
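A sketch of this coarse-to-fine procedure is given below; the representation of the data pyramid as a list of point sets and the function names robust_interp and within_tolerance are assumptions for this illustration.

    def hierarchic_robust_interpolation(pyramid, robust_interp, within_tolerance):
        # pyramid: list of point sets, coarsest level first (the data pyramid)
        surface = robust_interp(pyramid[0])        # DTM from the coarsest point set
        for level in pyramid[1:]:
            # densification: keep only finer-level points close to the coarser surface
            candidates = [p for p in level if within_tolerance(p, surface)]
            surface = robust_interp(candidates)    # refine the DTM with the denser data
        return surface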
* In previous publications the term residual was used instead of filter value.