Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects

International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, Valladolid, Spain, 3-4 June 1999
canopy), last pulse registration should be chosen if the final
elevation model is to describe the ground surface. For the
reconstruction of roofs we use first-pulse images.
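As a minimal sketch of this choice, the following example separates last-pulse points, suitable for a ground-oriented elevation model, from first-pulse points used for roof reconstruction. The point record layout and field names are assumptions for illustration and are not taken from the text.

import numpy as np

# Hypothetical laser returns: coordinates plus return number and total
# number of returns per emitted pulse (field names are illustrative).
points = np.array(
    [(512001.2, 5403220.5, 312.4, 1, 2),   # first pulse (canopy / roof edge)
     (512001.2, 5403220.5, 305.1, 2, 2),   # last pulse (ground)
     (512004.8, 5403223.0, 318.9, 1, 1)],  # single return
    dtype=[("x", "f8"), ("y", "f8"), ("z", "f8"),
           ("return_no", "i4"), ("n_returns", "i4")],
)

# Last-pulse registration: keep the final return of each pulse, used when
# the elevation model is to describe the ground surface.
last_pulse = points[points["return_no"] == points["n_returns"]]

# First-pulse registration: keep the first return, used here for roofs.
first_pulse = points[points["return_no"] == 1]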
A simple way to visualise the elevation data is to assign a
brightness value to the z-coordinate. Combining this brightness
with the z-coordinate in a 3D view gives the raster data a
plastic, relief-like appearance (Fig. 1). A more realistic
appearance can be obtained by texturing the elevation data with
an aerial image (Fig. 2). Nevertheless, in such a representation
no building model is explicitly available.
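A minimal sketch of the brightness assignment, assuming the elevation data is available as a regular raster (the array name, value range and 8-bit scaling are illustrative only):

import numpy as np

def z_to_brightness(dem):
    """Map the z-coordinate of a raster DEM linearly to an 8-bit brightness.

    dem : 2D array of elevations (metres)
    """
    z_min, z_max = float(dem.min()), float(dem.max())
    scale = 255.0 / max(z_max - z_min, 1e-9)
    return ((dem - z_min) * scale).astype(np.uint8)

# Example: shading a small synthetic elevation raster.
dem = np.array([[305.1, 305.3],
                [312.4, 318.9]])
brightness = z_to_brightness(dem)

Displaying this brightness image draped over the z-coordinates in a 3D view produces the relief-like impression of Fig. 1; replacing it with a resampled aerial image yields the textured view of Fig. 2.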
The manual construction and updating of 3D building models is
time-consuming and expensive, which is why several authors have
proposed approaches that exploit elevation data automatically.
Lemmens et al. (1997) present an approach with similarities to
ours for the 3D modelling of buildings with a single height,
using DEMs from airborne laser scanners and 2D digital maps.
In Hug and Wehr (1997), surface areas belonging to buildings
are detected in laser images by morphological filtering and by
examining local elevation histograms. The reflectivity obtained
by processing the return signal energy is additionally used to
separate segments of artificial objects from vegetation.
Polygonal 3D descriptions of buildings were not derived.
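The core of such a detection step can be sketched as follows, using a generic grey-scale opening with SciPy rather than the cited authors' exact implementation; the structuring-element size and height threshold are assumptions. Building candidates are the compact regions that rise clearly above the morphologically opened surface.

import numpy as np
from scipy import ndimage

def building_candidates(dsm, struct_size=15, min_height=3.0):
    """Detect raised, compact regions in a surface model by morphological
    filtering (illustrative sketch, not the method of Hug and Wehr).

    dsm         : 2D elevation raster (metres)
    struct_size : structuring element size in pixels; should exceed the
                  largest building footprint dimension
    min_height  : minimum height above the opened surface (metres)
    """
    # Grey-scale opening removes objects smaller than the structuring
    # element and so approximates the underlying terrain.
    terrain = ndimage.grey_opening(dsm, size=(struct_size, struct_size))
    above = dsm - terrain
    mask = above > min_height
    labels, n = ndimage.label(mask)
    return labels, n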
Geometric constraints in the form of parametric and prismatic
models are used by Weidner and Förstner (1995) to generate a
polygonal description of a building with a flat roof or a
symmetric, sloped gable roof. The reconstruction of more
complex roof shapes can be found in Haala and Brenner (1997),
where a ground plan of a building is used to derive roof
hypotheses. Any roof construction based on this approach yields
incorrect results if the roof structure inside the ground
polygon does not follow the cues that can be obtained from the
ground polygon (Haala and Brenner, 1997). A collection of
papers on 3D urban modelling, mapping and visualisation can
be found in Shibasaki (1998).
In our approach, we also combine elevation data and map data
to extract buildings, but the map data is not used to
reconstruct the building roof.
2. SCENE ANALYSIS 
The automatic generation of urban scene descriptions is a
multistage process that uses different information sources such
as maps, elevation data and aerial images. We describe the
structural relations of the object models by productions. The
hierarchical organisation of object concepts and productions can
be depicted by a production net, which, comparable to semantic
networks, displays the part-of hierarchies of object concepts.
Production nets are preferably implemented in a blackboard
architecture within the environment system BPI (Blackboard-based
Production system for Image understanding; see Stilla, 1995).
This paper focuses on the combination of elevation data and map
data to extract buildings. In a first step, we analyse the digital
map by means of a production net in order to obtain a simple
urban model consisting of prismatic objects.
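To illustrate how a production expressing a part-of relation might look, the following sketch composes lower-level concepts into a higher-level one; the class names, rule and firing condition are hypothetical and only mimic the structure described here, not the BPI implementation.

from dataclasses import dataclass, field

@dataclass
class Concept:
    """Node of a production net: an object concept with its parts."""
    name: str
    parts: list = field(default_factory=list)

def production_building(parts):
    """Production: a roof hypothesis together with enough wall segments is
    composed into a 'building' concept (part-of relation)."""
    if any(p.name == "roof" for p in parts) and \
       sum(p.name == "wall_segment" for p in parts) >= 3:
        return Concept("building", parts)
    return None

# Bottom-up application on the blackboard: primitives are replaced by the
# higher-level concept whenever a production fires.
primitives = [Concept("wall_segment"), Concept("wall_segment"),
              Concept("wall_segment"), Concept("roof")]
building = production_building(primitives)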
3. PRISMATIC OBJECTS 
We use a large-scale (1:5,000) vector map, which is organised in
several layers, each of which contains a different class of
objects (e.g. streets, buildings). The topological properties of
the map lines (connectivity, closedness and containment) are
tested by a production net of a generic model. The aim of the
analysis is to separate building parts, to determine enclosed
areas and to group building parts. The output of the analysis
is a hierarchical description of the buildings or building
complexes (Stilla and Michaelsen, 1997).
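The kinds of topological tests involved can be sketched as follows, using Shapely as a stand-in for the production net; the coordinates, layer handling and grouping criterion are assumptions for illustration.

from shapely.geometry import Polygon, LineString
from shapely.ops import polygonize

# Map lines of the building layer (coordinates are illustrative).
building_lines = [
    LineString([(0, 0), (10, 0)]), LineString([(10, 0), (10, 8)]),
    LineString([(10, 8), (0, 8)]), LineString([(0, 8), (0, 0)]),
]

# Connectivity / closedness: connected line work that closes a region
# becomes a candidate building part.
building_parts = list(polygonize(building_lines))

# Containment: an enclosed area (e.g. an inner courtyard) is detected when
# one polygon lies completely within another.
courtyard = Polygon([(3, 3), (7, 3), (7, 5), (3, 5)])
enclosed = [p for p in building_parts if courtyard.within(p)]

# Grouping: building parts that touch each other are merged into a complex.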
Fig. 3. Overview of the procedure for the generation of 3D
prismatic models from maps and elevation data, and their
visualisation (two bottom figures).
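The step from analysed ground plan to prismatic model can be sketched as follows, a simplified version of what Fig. 3 summarises; taking the median of the elevation samples as the single roof height is an assumption made here for illustration.

import numpy as np

def prismatic_model(footprint_xy, roof_samples, ground_z):
    """Extrude a building footprint to a prismatic model with one height.

    footprint_xy : list of (x, y) vertices of the ground polygon from the map
    roof_samples : elevation values taken from the laser data inside the
                   footprint (first-pulse raster)
    ground_z     : terrain height at the building (last-pulse raster)
    """
    roof_z = float(np.median(roof_samples))   # robust single roof height
    bottom = [(x, y, ground_z) for x, y in footprint_xy]
    top = [(x, y, roof_z) for x, y in footprint_xy]
    return bottom, top

# Example: a 10 m x 8 m footprint with roof samples around 312 m a.s.l.
bottom, top = prismatic_model(
    [(0, 0), (10, 0), (10, 8), (0, 8)],
    roof_samples=[311.8, 312.1, 312.4, 312.0],
    ground_z=305.0,
)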
	        