adopted to store and compute DTMs. Section 2 discusses our new approach and its storage requirements. In Sect. 3, the performance of the different models is analyzed by comparing their application to a case study. The conclusions and future developments follow.
1.1 Data based Models
Different data models can be adopted to store and transmit a DTM. Contour lines, grids (or elevation matrices) and Triangular Irregular Networks (TIN) are the standard choices.
Contour lines are obtained by connecting with a line all the points with the same height. The lines are drawn at given, equally spaced height intervals. Contour lines are useful to visualize heights on maps in 2D applications, but are seldom used for analysis purposes; they are stored and transmitted following the general rules of vector objects.
Gridded DTMs (in the following DTMGRID) are georeferenced as regular grids of nodes, whose heights are stored. The storage of a grid requires a set of metadata that allow its georeferencing (see Sect. 3) and are listed in the so-called header. The heights are stored in an ordered sequence. DTMGRID is a very simple conceptual model and can be easily accessed, visualized and spatially analyzed by map algebra. However, the choice of the grid resolution is a crucial point, because the storage size is inversely proportional to the square of the gridding interval. If rough terrain (for example mountains) alternates with flat terrain (plains), the high resolution needed to accurately describe the former causes a useless redundancy in the latter. To continuously describe the heights between the nodes, either a bilinear or a bicubic interpolation is typically applied.
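As an illustration of how heights between the nodes of a DTMGRID can be obtained, the following minimal sketch applies bilinear interpolation inside a grid cell; the array layout and the header fields (lower-left corner x0, y0 and spacings dx, dy) are assumptions made for the example, not a prescribed format.

```python
import numpy as np

def bilinear_height(grid, x0, y0, dx, dy, x, y):
    """Bilinearly interpolate the height at (x, y) from a gridded DTM.

    grid : 2D array of node heights, grid[i, j] located at (x0 + j*dx, y0 + i*dy)
    x0, y0, dx, dy : georeferencing metadata normally read from the header
    Assumes (x, y) lies inside the grid extent.
    """
    # Fractional position of (x, y) in grid units
    fx = (x - x0) / dx
    fy = (y - y0) / dy
    j, i = int(np.floor(fx)), int(np.floor(fy))
    # Local coordinates in [0, 1] within the cell
    u, v = fx - j, fy - i
    # Heights of the four surrounding nodes
    z00, z10 = grid[i, j], grid[i, j + 1]
    z01, z11 = grid[i + 1, j], grid[i + 1, j + 1]
    # Weighted average: the weights are the values of the local bilinear splines
    return (z00 * (1 - u) * (1 - v) + z10 * u * (1 - v)
            + z01 * (1 - u) * v + z11 * u * v)
```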
In a TIN, the DTM (in the following DTMTIN) is described by a set of planar triangular faces that are obtained by connecting sparse points, whose horizontal coordinates and heights are given. Usually, the Delaunay criterion is applied to triangulate the points. With a TIN model, more points can be stored where the terrain is rough and fewer points are used in flat areas. Each point of a TIN is represented by its three (X, Y, height) coordinates. Moreover, to reconstruct the topology of the triangles, the labels of the three vertices of each triangle are needed. This simple data model requires long computation times for the processing and analysis of the 3D surface: therefore, in practice, more complex topological models are applied, such as the node-based, the triangle-based or the edge-based data structures. These models reduce computation times but require an overhead of information that is stored and transmitted to the clients. When a TIN model is used, the height within each triangle is linearly interpolated from its three vertices.
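A minimal sketch of this idea follows, assuming SciPy is available for the Delaunay triangulation: the model keeps the point coordinates and the vertex labels of each triangle, and the height of a query point is linearly interpolated from the three vertices of the containing triangle (here through barycentric coordinates). Function and variable names are ours, for illustration only.

```python
import numpy as np
from scipy.spatial import Delaunay

def tin_height(points_xy, heights, x, y):
    """Interpolate the height at (x, y) on a TIN built with the Delaunay criterion.

    points_xy : (N, 2) array of planimetric coordinates of the sparse points
    heights   : (N,) array of the corresponding heights
    """
    tin = Delaunay(points_xy)                    # triangle-based topology
    t = int(tin.find_simplex(np.array([x, y])))  # triangle containing the point
    if t == -1:
        return np.nan                            # point outside the TIN
    verts = tin.simplices[t]                     # labels of the three vertices
    # Barycentric coordinates of (x, y) inside the triangle
    T = tin.transform[t]
    b = T[:2].dot(np.array([x, y]) - T[2])
    w = np.append(b, 1.0 - b.sum())              # three linear weights
    return w.dot(heights[verts])                 # linear interpolation from the vertices
```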
1.2 Interpolation techniques
To produce a DTM, several interpolation techniques exist: a first classification distinguishes exact and approximate interpolators. An exact interpolator passes through all the observations and allows the complete reconstruction of all the discontinuities existing in the dataset. However, the observation errors are not filtered and propagate into the model. A classical example of an exact interpolator is the Inverse Distance Weighting (IDW). Approximate interpolators apply statistical methods to estimate a smoother function from the observations: in this way, the errors can be filtered and both the accuracy of the observations and the correctness of the function can be assessed. However, actual details and discontinuities can be lost in the smoothing. Local Polynomial (POL) interpolation is an approximate interpolator when the coefficients are fewer than the observations and are estimated by least squares.
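For concreteness, a minimal sketch of IDW as an exact interpolator is given below; the power parameter and the coincidence tolerance are arbitrary assumptions of the example. Note that every observation enters every prediction, which anticipates the storage issue discussed later in this section.

```python
import numpy as np

def idw_height(obs_xy, obs_h, x, y, power=2.0):
    """Inverse Distance Weighting: an exact, deterministic interpolator.

    obs_xy : (N, 2) array of observation positions
    obs_h  : (N,) array of observed heights
    """
    d = np.hypot(obs_xy[:, 0] - x, obs_xy[:, 1] - y)
    # Exactness: at an observed point the interpolator returns the observation itself
    hit = d < 1e-12
    if hit.any():
        return obs_h[hit][0]
    w = 1.0 / d ** power              # weights decay with distance
    return np.sum(w * obs_h) / np.sum(w)
```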
In the deterministic interpolation, either exact or approximate,
the analytical model of the surface is a priori chosen and the
observations are used to estimate it: IDW and POL are
examples of deterministic interpolators. In the stochastic
interpolation (Christakos, 1992), the observations are
considered as a sample of a random field (the surface) that is
completely described by spatial stochastic properties like, for
example, the covariance function. The stochastic properties are
estimated analyzing the observations and then applied to
interpolate the surface. Collocation and kriging are the classical
examples of stochastic interpolators.
Note that the most popular interpolation techniques, as reported in the scientific and technical literature, cannot be easily and efficiently used to implement an analytical model, because the interpolating functions cannot be described by a small number of parameters or coefficients. In IDW and POL, the interpolation coefficients and domain are a function of the positions of both the interpolation point and the observations: to replicate the model, all the observations must be stored and distributed. The Radial Basis Functions technique uses a linear combination of radial functions that interpolate the observations exactly and are characterized by the minimum curvature. These methods (Regularized Spline, Spline with Tension, Thin Plate Spline) differ in the choice of the function; all of them can be analytically described by a finite set of coefficients, but the needed coefficients are at least as many as the raw observations.
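As an illustration of why these methods do not reduce the storage, a sketch of a Thin Plate Spline fit follows: one radial coefficient per observation, plus a small affine trend, must be estimated and stored. The r² log r kernel and the plain dense solver are assumptions made for brevity.

```python
import numpy as np

def thin_plate_spline_coeffs(obs_xy, obs_h):
    """Fit a Thin Plate Spline: one radial coefficient per observation.

    Returns the radial coefficients w (length N) and the affine trend a
    (length 3); storing the model therefore requires at least as many
    coefficients as raw observations.
    """
    n = len(obs_h)
    d = np.hypot(obs_xy[:, None, 0] - obs_xy[None, :, 0],
                 obs_xy[:, None, 1] - obs_xy[None, :, 1])
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d), 0.0)    # r^2 log r kernel
    P = np.column_stack([np.ones(n), obs_xy])         # affine trend 1, x, y
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.concatenate([obs_h, np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]                           # w (N values), a (3 values)
```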
Let us consider a stochastic interpolator, for example collocation. The height in a point $P$ is provided by $h(P) = \mathbf{c}^{T}\boldsymbol{\xi}$, where $\boldsymbol{\xi} = \mathbf{C}^{-1}\mathbf{y}_{0}$ is the vector of the observations pre-multiplied by the inverse of their covariance matrix and $\mathbf{c}$ is the cross-covariance vector between the point and the observations. $\mathbf{c}$ can be built knowing the covariance function of the surface and the positions of the observations, while $\boldsymbol{\xi}$ needs to be stored: also in this case, an analytical model would require as many data as the original observations.
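The same point can be made in code. The sketch below assumes the covariance function cov is supplied by the user (for instance a Gaussian model such as cov = lambda d: np.exp(-(d / 100.0) ** 2), whose parameters are purely illustrative): the vector xi has as many entries as the observations.

```python
import numpy as np

def collocation_height(obs_xy, obs_h, x, y, cov):
    """Collocation prediction: h(P) = c' xi, with xi = C^{-1} y.

    cov(d) is the covariance function of the surface as a function of distance.
    Both the observation positions and xi must be kept to replicate the model.
    """
    d_obs = np.hypot(obs_xy[:, None, 0] - obs_xy[None, :, 0],
                     obs_xy[:, None, 1] - obs_xy[None, :, 1])
    C = cov(d_obs)                                    # covariance among observations
    xi = np.linalg.solve(C, obs_h)                    # xi = C^{-1} y
    c = cov(np.hypot(obs_xy[:, 0] - x, obs_xy[:, 1] - y))  # cross-covariances
    return c @ xi
```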
The classical bilinear splines estimated by least squares provide a twofold interpretation, because they can be thought of as both data based and analytical models. Given the required spatial resolution, the observations are interpolated to estimate the coefficients of the splines, which are used to predict the heights on a regular grid that represents the data based model. If the splines and the grid have the same spatial resolution, the coefficients of the splines and the heights of the relevant grid nodes are equal. Moreover, the coefficients of the bilinear splines used to interpolate from the four neighboring nodes of a regular grid are exactly the relevant four heights: indeed, each local bilinear spline attains its maximum at its own node and vanishes at all the other nodes. In this case, the analytical model has exactly the same complexity as the data based model.
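This property can be verified numerically. In one dimension (the bilinear case is the tensor product of two such directions), the hat spline centred on a node evaluates to 1 there and to 0 at every other node, so the design matrix built at the nodes is the identity and the least squares coefficients coincide with the node heights; the unit node spacing below is an arbitrary assumption.

```python
import numpy as np

def bilinear_basis(u):
    """1-D 'hat' spline: 1 at its own node, 0 at every other node."""
    return np.maximum(0.0, 1.0 - np.abs(u))

nodes = np.arange(5)                                   # small regular grid, spacing 1
A = bilinear_basis(nodes[:, None] - nodes[None, :])    # basis values at the nodes
print(np.allclose(A, np.eye(5)))                       # True: coefficients = node heights
```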
We have therefore studied the adoption of a new multi-resolution splines interpolation, which represents a true analytical model and provides actual storage and distribution savings with respect to a data based model.
2. THE MULTI-RESOLUTION SPLINES APPROACH
The approach was previously discussed in a preliminary way by Brovelli and Zamboni (2009): it is an approximate deterministic method whose estimation principle is based on Least Squares (LS, Koch, 1987).
The founding idea is to combine splines of different widths in order to guarantee a resolution adequate to the data density in every region of the field, exploiting all the available information implicitly stored in the sample.
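The sketch below illustrates only this founding idea, not the actual formulation of the method: two levels of hat splines with different widths (node layouts and widths are arbitrary assumptions) are stacked in a single design matrix and their coefficients are estimated jointly by least squares; in the real method the finer splines are used only where the data density supports them.

```python
import numpy as np

def hat(u):
    """1-D bilinear (hat) spline, used here for both levels."""
    return np.maximum(0.0, 1.0 - np.abs(u))

def design_matrix(x, nodes, width):
    """Columns: splines of a given width centred on the given nodes."""
    return hat((x[:, None] - nodes[None, :]) / width)

def fit_two_levels(x, h, coarse_nodes, fine_nodes, coarse_w, fine_w):
    """Jointly estimate coarse and fine spline coefficients by least squares."""
    A = np.hstack([design_matrix(x, coarse_nodes, coarse_w),
                   design_matrix(x, fine_nodes, fine_w)])
    coeffs, *_ = np.linalg.lstsq(A, h, rcond=None)
    return coeffs
```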