Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects

International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, Valladolid, Spain, 3-4 June, 1999
Figure 2. Semantic net representing a generic model of a purification plant and its relation to the image data. 
the scene-specific knowledge from the GIS. The 2D image domain contains the sensor layers, adapted to the current sensors, and the data layer.
For the objects of the 2D image domain, general knowledge about the sensors and methods for the extraction and grouping of image primitives such as lines and regions is needed. The primitives are extracted by image processing algorithms and stored in the semantic net as instances of the concepts Line Data and Region Data, respectively. Due to fragmentation, the lines and regions have to be grouped according to perceptual criteria such as continuity, nearness, and similarity. A continuous Stripe, for example, is represented in the semantic net as a composition of neighbouring SubStripes. The sensor layer can be adapted to the current sensor type, such as SAR, IR, or an optical sensor. For a multisensor analysis, the layer is duplicated for each new sensor type to be interpreted, assuming that each object can be observed in all the images (see Fig. 2). All information of the 2D image domain is given with respect to the image coordinate system. As each transformation between the image and scene domain is determined by the sensor type and its projection parameters, the transformations are modelled explicitly in the semantic net by the concept Sensor and its specializations for the different sensor types.
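The perceptual grouping of fragmented line primitives described above can be sketched as follows. This is a minimal, hypothetical illustration: the class names (SubStripe), the thresholds, and the greedy chaining strategy are assumptions for the sketch, not the actual grouping procedure of the presented system.

```python
from dataclasses import dataclass
import math

@dataclass
class SubStripe:
    # Endpoints of one extracted line fragment in image coordinates.
    x1: float; y1: float; x2: float; y2: float

    def direction(self) -> float:
        return math.atan2(self.y2 - self.y1, self.x2 - self.x1)

def collinear_and_near(a: SubStripe, b: SubStripe,
                       max_gap: float = 5.0, max_angle: float = 0.1) -> bool:
    """Perceptual criteria: continuity (similar direction) and nearness (small gap)."""
    gap = math.hypot(b.x1 - a.x2, b.y1 - a.y2)
    angle = abs(a.direction() - b.direction())
    return gap <= max_gap and angle <= max_angle

def group_stripes(fragments: list) -> list:
    """Greedily chain neighbouring SubStripes into continuous Stripes."""
    stripes = []
    for frag in fragments:
        for stripe in stripes:
            if collinear_and_near(stripe[-1], frag):
                stripe.append(frag)
                break
        else:
            stripes.append([frag])  # fragment starts a new Stripe
    return stripes
```

A grouped list of SubStripes would then be stored in the semantic net as one instance of the concept Stripe composed of its parts.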
The knowledge about inherent, sensor-independent properties of objects is represented in the 3D scene domain, which is subdivided into the physical, the GIS, and the semantic layer. The physical layer contains the geometric and radiometric properties as the basis for the sensor-specific projection; hence, it forms the interface to the sensor layer(s). The semantic layer is the most abstract layer, where the scene objects with their symbolic meanings are stored.
The semantic net eases the formulation of hierarchical and topological relations between objects. Thus, it is possible to describe complex objects like a purification plant as a composition of sedimentation tanks and buildings close to a road and a river, into which the cleaned water is drained (see Fig. 2). The symbolic objects are specified more concretely by their geometry and material. In conjunction with the known sensor type, the geometric and radiometric appearance of the objects in the image can be predicted. This prediction can be improved if GIS data of the observation area are available. Though the GIS may be out of date, it represents a partial interpretation of the scene, providing semantic information. Hence, the GIS objects are connected directly with the objects of the semantic layer.
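Such hierarchical (part-of) and topological (close-to) relations could be encoded along the following lines. This is a minimal, hypothetical data structure assumed for illustration; the semantic net formalism of the actual system is richer (cardinalities, specialization links, attribute computation).

```python
# Hypothetical minimal semantic-net concept: composition (part-of) links
# with cardinalities, plus named topological relations such as close-to.
class Concept:
    def __init__(self, name, layer):
        self.name = name
        self.layer = layer            # e.g. "semantic", "physical", "GIS"
        self.parts = []               # part-of links: (concept, min, max)
        self.relations = []           # named relations: (relation, target)

    def add_part(self, part, minimum=1, maximum=None):
        # Cardinality [min, max]; maximum=None means unbounded ([1, oo]).
        self.parts.append((part, minimum, maximum))

    def relate(self, relation, target):
        self.relations.append((relation, target))

# Purification plant of Fig. 2, modelled in the semantic layer: a
# composition of sedimentation tanks and buildings, close to a road
# and a river into which the cleaned water is drained.
plant = Concept("PurificationPlant", layer="semantic")
tank = Concept("SedimentationTank", layer="semantic")
building = Concept("Building", layer="semantic")
road = Concept("Road", layer="semantic")
river = Concept("River", layer="semantic")

plant.add_part(tank, minimum=1, maximum=None)   # at least one tank
plant.add_part(building, minimum=1, maximum=None)
plant.relate("close-to", road)
plant.relate("close-to", river)
```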
4. INTERPRETATION OF MULTISENSOR IMAGES 
The automatic analysis of multisensor images requires the fusion of sensor data. The presented concept of strictly separating the sensor-independent knowledge of the 3D scene domain from the sensor-dependent knowledge in the 2D image domain eases the