AN APPROACH FOR THE SEMANTICALLY CORRECT
INTEGRATION OF A DTM AND 2D GIS VECTOR DATA
A. Koch
Institute of Photogrammetry and GeoInformation (IPI), University of Hannover, Germany
Nienburger Straße 1, 30167 Hannover
koch@ipi.uni-hannover.de
Commission IV, WG IV/6
KEY WORDS: GIS, Adjustment, Integration, Modeling, Visualization, DEM/DTM
ABSTRACT:
The most commonly used topographic vector data, the core data of a geographic information system (GIS), are currently two-
dimensional. The topography is modelled by different objects which are represented by single points, lines and areas with additional
attributes containing information, for example on function and size of the object. In contrast, a digital terrain model (DTM) in most
cases is a 2.5D representation of the earth’s surface. The integration of the two data sets leads to an increase in the dimension
of the topographic objects. However, inconsistencies between the data may cause a semantically incorrect result of the integration
process.
This paper presents an approach for a semantically correct integration of a DTM and 2D GIS vector data. The algorithm is based on
a constrained Delaunay triangulation. The DTM and the bounding polygons of the topographic objects are first integrated without
considering the semantics of the objects. Then, those objects which contain implicit height information are further utilized: object
representations are formulated and the semantics of the objects is considered within an optimization process using equality and
inequality constraints. The algorithm is based on an inequality constrained least squares adjustment formulated as a linear
complementarity problem (LCP) and results in a semantically correct integrated 2.5D GIS data set.
First results are presented using simulated and real data. Lakes, represented by horizontal planes with rising terrain outside the
lake, and roads, composed of several tilted planes, were investigated. The first results are satisfactory: the
constraints are fulfilled, and the visualization of the integrated data set corresponds to the human view of the topography.
1. INTRODUCTION
1.1 Motivation
The most commonly used topographic vector data, the core data
of a geographic information system (GIS), are currently two-
dimensional. The topography is modelled by different objects
which are represented by single points, lines and areas with
additional attributes containing information, for example on
function and size of the object. In contrast, a digital terrain
model (DTM) in most cases is a 2.5D representation of the
earth’s surface. The integration of the two data sets leads to an
increase in the dimension of the topographic objects.
However, inconsistencies between the data may cause a
semantically incorrect result of the integration process.
Inconsistencies may be caused by different object modelling
and different surveying and production methods. For instance,
vector data sets often contain roads modelled as lines or
polylines. The attributes contain information on road width,
road type, etc. If the road is located on a slope, the corresponding part of the DTM is often not modelled correctly.
When these data sets are integrated, the slope perpendicular to the driving direction is identical to the slope of the DTM,
which does not correspond to the real slope of the road. Another reason for inconsistencies is the fact that data are often
produced independently. The DTM may be generated by using
lidar or aerial photogrammetry. Topographic vector data may
be based on digitized topographic maps or orthophotos. These
different methods may cause inconsistencies, too.
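To make the road example concrete, the following sketch (an illustration, not the paper's algorithm; the DTM sampling function and the tolerance are assumptions) flags road segments whose DTM cross slope exceeds what a real road could plausibly have:

```python
import numpy as np

def dtm_height(x, y):
    # Hypothetical stand-in for interpolation in a real DTM grid
    return 0.3 * y  # terrain rising in the y direction

def cross_slope_inconsistent(p0, p1, half_width, max_cross_slope=0.05):
    """Check whether the DTM slope perpendicular to the driving
    direction exceeds a plausible road cross slope.

    p0, p1:          (x, y) endpoints of a road centerline segment
    half_width:      half the road width from the GIS attribute, in m
    max_cross_slope: assumed maximum real cross slope of a road
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    t = (p1 - p0) / np.linalg.norm(p1 - p0)   # driving direction
    n = np.array([-t[1], t[0]])               # perpendicular direction

    mid = 0.5 * (p0 + p1)
    z_left = dtm_height(*(mid + half_width * n))
    z_right = dtm_height(*(mid - half_width * n))

    # Cross slope as a naive integration would reproduce it from the DTM
    cross_slope = abs(z_left - z_right) / (2.0 * half_width)
    return cross_slope > max_cross_slope, cross_slope

print(cross_slope_inconsistent((0, 0), (10, 0), half_width=3.5))
# (True, 0.3): the DTM cross slope is far steeper than a real road
```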
Many applications benefit from semantically correct integrated
data sets. For instance, good visualizations of 3D models of the
topography need correct data and are important for flood
simulations and risk management. A semantically correct
integrated data set can also be used to produce correct
orthophotos in areas with non-modelled bridges within the
DTM. Furthermore, the semantically correct integration may
show discrepancies between the data and thus allow one to draw
conclusions on the quality of the DTM.
1.2 Related work
The integration of a DTM and 2D GIS data is an issue that has
been tackled for more than ten years. Weibel (1993), Fritsch &
Pfannenstein (1992) and Fritsch (1991) distinguish different forms of DTM integration. In the case of height attributing,
each point of the 2D GIS data set carries an attribute “point height”.
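A minimal sketch of height attributing follows (illustrative only, not from the cited work; the grid values and the choice of scipy's RegularGridInterpolator are assumptions): each 2D vertex receives its height by interpolation in a DTM grid.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical DTM: a regular 4 x 4 height grid over [0, 30] x [0, 30] m
x = y = np.linspace(0.0, 30.0, 4)
heights = np.array([[100.0, 101.0, 102.0, 103.0],
                    [101.0, 102.0, 103.0, 104.0],
                    [102.0, 103.0, 104.0, 105.0],
                    [103.0, 104.0, 105.0, 106.0]])
dtm = RegularGridInterpolator((x, y), heights)

# Each 2D GIS vertex receives its "point height" attribute by interpolation
gis_vertices = np.array([[5.0, 5.0], [12.5, 20.0], [28.0, 3.0]])
point_height = dtm(gis_vertices)
for (px, py), z in zip(gis_vertices, point_height):
    print(f"vertex ({px:4.1f}, {py:4.1f}) -> point height {z:6.2f} m")
```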
When interfaces are used, the DTM program and the GIS can interact; either the two systems are
independent, or DTM methods are introduced into the user interface of the GIS. The total integration or full database
integration comprises a common data management within a database. The terrain data is often stored in the database in
the form of a triangular irregular network (TIN) whose vertices contain X, Y, and Z coordinates. The DTM is not merged with
the data of the GIS. The merging process, i.e. the introduction
of the 2D geometry into the TIN, was later investigated by
several authors (Lenk 2001; Klötzer 1997; Pilouk 1996). The
approaches differ in the sequence of introducing the 2D
geometry, the amount of change of the terrain morphology and
the number of vertices after the integration process. Among
others, Lenk and Klötzer argue that the shape of the integrated
TIN should be identical to the shape of the initial DTM TIN.
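As a minimal sketch of this merging step (illustrative only, not the implementation used by these authors), the 2D object geometry can be introduced as constrained edges of a triangulation, here with the `triangle` package, a Python wrapper around Shewchuk's Triangle; the coordinates are made-up example values:

```python
import numpy as np
import triangle  # Python bindings for Shewchuk's Triangle library

# Hypothetical DTM vertices (x, y); heights would be carried alongside
dtm_xy = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0],
                   [0.0, 10.0], [5.0, 5.0]])
object_xy = np.array([[2.0, 1.0], [8.0, 9.0]])  # e.g. a road edge

# Planar straight line graph: all points plus the object edge as a segment
vertices = np.vstack([dtm_xy, object_xy])
segments = [[len(dtm_xy), len(dtm_xy) + 1]]

# 'p' yields a constrained Delaunay triangulation of the PSLG, so the
# object edge is forced to appear as an edge of the integrated TIN
tin = triangle.triangulate({'vertices': vertices, 'segments': segments}, 'p')
print(tin['vertices'].shape, tin['triangles'].shape)
```

Such a call changes the triangulation, and hence the interpolated surface, near the inserted edges; preserving the shape of the initial DTM therefore requires the explicit handling of intersection (Steiner) points described next.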
Lenk developed an approach for the incremental insertion of
object points and their connections into the initial DTM TIN.
The sequence of insertion is object point, object line, object
point etc. The intersection points between the object line and
the TIN edges (Steiner points) are considered as new points of
the integrated data set. Klötzer, on the other hand, first
introduces all object points, then carries out a new preliminary
triangulation. Subsequently, he introduces the object lines,