International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B4, 2012
XXII ISPRS Congress, 25 August — 01 September 2012, Melbourne, Australia
A NEW MULTI-RESOLUTION ALGORITHM TO STORE AND TRANSMIT
COMPRESSED DTM
Ludovico Biagi *, Maria Antonia Brovelli, Giorgio Zamboni
Politecnico di Milano, DIIAR, Geomatics Laboratory at Como Campus, via Valleggio 11, 22100 Como, Italy
ludovico.biagi@polimi.it, maria.brovelli@polimi.it, giorgio.zamboni@polimi.it
Commission IV, WG IV/1
KEY WORDS: Internet/Web, LIDAR, Modelling, Archiving, Compression, DEM/DTM, Algorithms, GIS
ABSTRACT
WebGIS and virtual globes allow the distribution of DTMs and their three-dimensional representation to the Web user community. In these
applications, the database storage size represents a critical point.
DTMs are obtained by some sampling or interpolation of the raw observations and are typically stored and distributed as data-based
models, such as regular grids. A new approach to store and transmit DTMs is presented. The idea is to use multi-resolution
bilinear spline functions to interpolate the observations and to model the terrain. In detail, the algorithm performs the following
actions.
1) The spatial distribution of the observations is investigated. Where few data are available, few levels of splines are activated, while
more levels are activated where the raw observations are denser: each new level corresponds to a halving of the spline support with
respect to the previous level.
2) After the selection of the spline functions to be activated, the relevant coefficients are estimated by interpolating the observations.
The interpolation is computed by batch least squares.
3) Finally, the estimated coefficients of the splines are stored.
The model guarantees a local resolution consistent with the data density and can be defined as analytical, because the coefficients of a
given function are stored instead of a set of heights.
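The batch least squares estimation of step 2 can be sketched as an ordinary linear fit of the spline coefficients to the raw observations; the function and variable names below are illustrative, not taken from the paper.

```python
import numpy as np

def fit_coefficients(obs, bases):
    # obs: list of raw observations (x, y, h); bases: list of callables b(x, y),
    # one per activated spline. Batch least squares: solve A c ~= h, where
    # A[k, j] = bases[j](x_k, y_k) is the design matrix.
    A = np.array([[b(x, y) for b in bases] for x, y, _ in obs])
    h = np.array([z for _, _, z in obs])
    coeffs, *_ = np.linalg.lstsq(A, h, rcond=None)
    return coeffs
```

With more observations than activated splines the system is overdetermined, and the least squares solution minimizes the sum of squared residuals between the model and the raw heights.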
The approach is discussed and compared with the traditional techniques to interpolate, store and transmit DTMs, considering
accuracy and storage requirements. It is also compared with another multi-resolution technique. The research has been funded by the
INTERREG HELI-DEM (Helvetia Italy Digital Elevation Model) project.
1. INTRODUCTION
Nowadays, Digital Terrain Models (DTMs, Li et al., 2005)
represent fundamental databases for Geographical
Information Systems (GIS, O'Sullivan and Unwin, 2003). Until
a few years ago, DTMs were used only in specific applications
of territorial analysis, typically by the scientific community.
The advent and diffusion of new technologies based on
Web GIS and virtual globes have changed the perspective:
altimetric analyses and three-dimensional representations of the
terrain are no longer only the object of new research but are also common practice.
With respect to traditional programs for two-dimensional
representation, virtual globes have introduced the
third dimension and, consequently, simpler usage and
greater visual consistency between the digital representation
and the real world. At present, the new acquisition techniques
provide information with unprecedented accuracy. Therefore,
virtual globes are no longer merely qualitative viewers for low-resolution
global data, but can become scientific instruments to
process and analyze highly accurate geographical information.
Virtual globes and Web GIS cannot be properly compared, but
they share a fundamental principle: the geographic information
(satellite and aerial images, height data, vector objects) is
accessed via the Web. In particular, the servers provide the data
according to specific transmission standards that have been
defined mainly by the Open Geospatial Consortium (OGC,
2006, 2010a, 2010b).
The new DTMs provide height information with unprecedented
accuracy and spatial density (El-Sheimy et al., 2005). However,
in the Web distribution of geographical information, the
database storage size represents a critical point. Given a
specific area of interest, the server typically needs to perform
some preprocessing, and the data then have to be sent to the client, which
applies some additional processing: the efficiency of all these
actions is crucial to guarantee near real-time availability of
the information.
Generally speaking, the terrain surface is composed of an
infinite number of points: a DTM is obtained by interpolating
the available height observations and extracting a finite dataset
that allows the reconstruction of the whole surface at a given
accuracy. DTMs can be stored and transmitted according to
two different approaches: data-based models and
analytical models (Biagi et al., 2011).
In the data-based models, the DTM is stored and transmitted as
a sample of interpolated heights that are used to reconstruct
(interpolate) the terrain heights at other points. On the contrary,
an analytical model implies the storage of a dataset of
coefficients that, in a one-to-one relation with a given function,
allows the computation of the height everywhere.
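The two approaches can be contrasted with a minimal sketch: a data-based model stores grid heights and reconstructs other points by bilinear interpolation of the four surrounding nodes, while an analytical model stores only coefficients and evaluates the associated function. The function names and the unit grid step are assumptions for illustration.

```python
import numpy as np

def grid_height(grid, x0, y0, step, x, y):
    # Data-based model: heights sampled on a regular grid with origin
    # (x0, y0) and spacing `step`; bilinear interpolation of the 4 nodes.
    i = int((x - x0) // step)
    j = int((y - y0) // step)
    fx = (x - x0) / step - i
    fy = (y - y0) / step - j
    return ((1 - fx) * (1 - fy) * grid[j, i] + fx * (1 - fy) * grid[j, i + 1]
            + (1 - fx) * fy * grid[j + 1, i] + fx * fy * grid[j + 1, i + 1])

def analytic_height(coeffs, bases, x, y):
    # Analytical model: one stored coefficient per basis function; the
    # height is computed everywhere as a linear combination.
    return sum(c * b(x, y) for c, b in zip(coeffs, bases))
```

The data-based model must store a height per grid node regardless of terrain complexity, whereas the analytical model stores only as many coefficients as basis functions are needed for the required accuracy.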
The purpose of this paper is to discuss a new analytical
approach, based on a least squares interpolation by
multi-resolution bilinear splines, whose preliminary implementation
was already discussed in (Brovelli and Zamboni,
2009). The raw observations are interpolated by a linear
combination of splines with compact support, whose
resolutions and positions vary in space and are automatically
chosen according to the distribution of the raw observations. In
the following, they will be called multi-resolution splines. For
each spline, the resolution level, the position and the coefficient
are stored by the server and are transmitted to the clients. The
coefficients and their auxiliary metadata allow the complete
reconstruction of the terrain at any point, and different detail
levels can be provided according to the required accuracy. The
purpose of the proposed approach is to reduce the storage
requirements with respect to the traditional models without any
loss of accuracy.
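The evaluation of such a model can be sketched as follows: each stored record (level, position, coefficient) identifies a tensor-product bilinear spline whose support halves at each finer level, and the terrain height is the sum of the weighted splines. The base support value and the record layout are assumptions of this sketch, not the paper's actual encoding.

```python
def hat(t):
    # 1-D linear B-spline ("tent"): 1 at the node, 0 for |t| >= 1.
    return max(0.0, 1.0 - abs(t))

def bilinear_spline(x, y, xc, yc, support):
    # Tensor product of two 1-D tents, centred at (xc, yc) with compact
    # support `support` in each direction.
    return hat((x - xc) / support) * hat((y - yc) / support)

def dtm_height(x, y, splines):
    # splines: iterable of (level, xc, yc, coeff) records; the support at
    # level l is s0 / 2**l (s0 = 1.0 is an assumed base support).
    s0 = 1.0
    return sum(c * bilinear_spline(x, y, xc, yc, s0 / 2 ** lev)
               for lev, xc, yc, c in splines)
```

Because each spline has compact support, only the few splines whose support contains (x, y) contribute to the sum, which keeps both evaluation and transmission local to the area of interest.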
The following part of the introduction shortly summarizes the
data models and interpolation methods that are typically