
  
Handmann, Uwe 
  
5 SCENE INTERPRETATION 
The scene interpretation interprets and integrates the different sensor results in order to extract consistent, behavior-based information. It is subdivided into a behavior-based representation part and a scene analysis part (fig. 6).
5.1 Behavior-based representations
The data prepared by the object-related analysis have to be integrated in order to detect and evaluate inconsistencies and discrepancies. In the data integration, the incoming data (in sensor coordinates) are transformed to a common description base. Currently, object and lane information are transformed to world coordinates with respect to the moving observer. The positions of the detected objects are determined in a bird's-eye view of the driving plane. The transformation rules follow the given position and orientation of each sensor (e.g., the position and pitch angle of the CCD cameras in the car) and the position of the car in the lane. The sensor parameters are taken from the knowledge base (e.g., transformation equations for CCD cameras using a pinhole model presented by (Brauckmann, 1994)). Physical considerations concerning the movement and the position of potential objects are incorporated, as are constant parameters (e.g., the length of a vehicle according to its classification). Explicit information from the knowledge base (e.g., physical rules, traffic rules, sensor evaluation factors) is applied in the knowledge integration. In this module, the synchronized information from the different sensors is coupled using knowledge: e.g., ROIs detected above the horizon are eliminated, and lane positions are determined relative to the moving observer according to the lane information. The properly organized information is presented to the behavior-based representations of the situation as well as to an internal memory implemented as an object list.
  
Figure 6: Structure of the scene interpretation.

Figure 7: Image with segmentation results (a), tracking results (b), lane information (c) and bird's-eye view (d).
The data representations comprise a bird's-eye view representation, a representation containing free driving space and trajectory information, as well as object-related information if the actual task requires it. These representations are organized dynamically for the stabilization of data and for the performance of planning tasks. An example of a dynamically organized bird's-eye view representation is shown in fig. 7(d). The results of the object-related analysis (segmentation, object tracking) and the lane information are integrated to build the bird's-eye view representation.
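A minimal sketch of how such a dynamically organized representation can be maintained is given below; the class and field names are hypothetical, and the age-based removal is only one plausible realization of the stabilization over time described above.

    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        obj_id: int
        x: float            # lateral position (m) in the bird's-eye view
        z: float            # longitudinal position (m)
        label: str          # classification, e.g. "car" or "truck"
        age: int = 0        # frames since the object was last confirmed

    class BirdsEyeView:
        """Dynamically organized bird's-eye view: integrates tracking results
        and lane information and drops objects that are no longer confirmed."""

        def __init__(self, max_age=5):
            self.objects = {}        # object list: obj_id -> TrackedObject
            self.lanes = []          # lane boundaries in world coordinates
            self.max_age = max_age

        def update(self, detections, lanes):
            for obj in self.objects.values():
                obj.age += 1                       # unconfirmed objects age
            for det in detections:                 # confirmed detections are refreshed
                self.objects[det.obj_id] = det
            self.lanes = lanes
            # Dynamic reorganization: forget objects missing for too long.
            self.objects = {i: o for i, o in self.objects.items()
                            if o.age <= self.max_age}

    bev = BirdsEyeView()
    bev.update([TrackedObject(1, -1.5, 22.0, "car")],
               lanes=[[(-1.8, 0.0), (-1.8, 60.0)]])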
The internal memory is fed by the results of the knowledge integration as well as by the results of the dynamics of the representations. The object list in the internal memory contains information on previously detected objects to enable time-stabilized object detection and to determine object-related results. From the object data accumulated over time and sustained by the representation dynamics, an object-related Time To Contact value (TTC, mentioned by (Noll et al., 1995)) is determined. The object and the observer are assumed to collide if they would occupy the same space at the same time. This space is determined by the intersection of the estimated trajectories of the object edges.
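As an illustration, the sketch below computes the TTC in its standard range/range-rate form and tests whether the laterally extrapolated edges of object and observer would overlap at that time; this is a simplified stand-in for the cited formulation of (Noll et al., 1995), with hypothetical names throughout.

    def time_to_contact(distance, closing_speed):
        # Standard range / range-rate TTC; an infinite value means the
        # range is opening or constant, so no contact is predicted.
        if closing_speed <= 0.0:
            return float("inf")
        return distance / closing_speed

    def edges_collide(obj_edges, obj_vx, ego_edges, ttc):
        # Extrapolate the lateral edges of the object to time ttc (in the
        # observer frame) and test whether the occupied intervals intersect.
        left = obj_edges[0] + obj_vx * ttc
        right = obj_edges[1] + obj_vx * ttc
        return not (right < ego_edges[0] or left > ego_edges[1])

    ttc = time_to_contact(distance=30.0, closing_speed=6.0)   # 5.0 s
    print(edges_collide((-1.0, 1.0), obj_vx=0.1, ego_edges=(-0.9, 0.9), ttc=ttc))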
  