[Figure: block diagram with cameras and TV monitors attached to the processing system]
Fig.2: Architecture of BVV 2
ing a loosely coupled system requiring only modest bus bandwidth. Using the communication services of the distributed operating system kernel, a desired processing structure can be defined entirely by downloadable application software. Thus, task-specific cooperating processor clusters can be formed. Typically, such a CPU group consists of several 'Parallel Image Processors' (PP) at a low hierarchical level that perform local feature extraction operations on their windows. The PPs of a group may be coordinated by a 'General Purpose Processor' (GPP, more recently renamed 4D-object processor, 4D-OP) at a higher hierarchical level, which interprets the PPs' feature data and controls or guides the activities of its PP group (see also fig.2).
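
This two-level PP/GPP organization can be illustrated with a minimal sketch. All names below (ParallelImageProcessor, GroupCoordinator, extract_features) are hypothetical stand-ins; the actual kernel communication services of the BVV 2 are not specified here.

import queue
import threading

class ParallelImageProcessor(threading.Thread):
    """Low-level 'PP': extracts local features from its image window."""

    def __init__(self, window_id, frames, results):
        super().__init__(daemon=True)
        self.window_id = window_id
        self.frames = frames    # queue of image windows assigned to this PP
        self.results = results  # shared queue read by the coordinator

    def run(self):
        while True:
            window = self.frames.get()
            features = self.extract_features(window)      # purely local operation
            self.results.put((self.window_id, features))  # report upward

    def extract_features(self, window):
        # Placeholder for a local feature extraction operator on the window.
        return [px for px in window if px > 128]

class GroupCoordinator:
    """Higher-level 'GPP'/4D-OP: interprets PP feature data, guides the group."""

    def __init__(self, n_pps):
        self.frames = [queue.Queue() for _ in range(n_pps)]
        self.results = queue.Queue()
        for i in range(n_pps):
            ParallelImageProcessor(i, self.frames[i], self.results).start()

    def process_frame(self, windows):
        for q, w in zip(self.frames, windows):  # distribute windows to the PPs
            q.put(w)
        return [self.results.get() for _ in windows]  # collect the feature data

# Example: one coordinator guiding four PPs on one frame.
gpp = GroupCoordinator(n_pps=4)
print(gpp.process_frame([[10, 200, 90, 255]] * 4))
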
Low-level control of motors, collection of information from all kinds of sensors and preprocessing of sensor data are performed by an SMP system (Intel 80186), which holds a number of I/O function boards. The navigation software and the overall management run on an 80386 PC-AT compatible computer, which also serves for mass storage, real-time data logging and software development. Communication between the three computer systems is done via an IEC bus.
The second testbed serves as a 'rolling field lab' for computer vision research in outdoor applications with human operators and supervisors on board and is known as 'VaMoRs'. This vehicle has drawn international attention by the demonstration of autonomous road-following at speeds up to 96 km/h in 1987. This demonstration set a world record for autonomous road vehicles. Besides the physical appearance, the main differences between the two testbeds, from a navigational point of view, are found in the more powerful image processing system and the sophisticated pointing device for the camera. An electromechanical pan-tilt platform carrying two CCD cameras, mounted in the center behind the front windshield and hanging from the roof, provides fast 2-axis viewing direction control. Its control is part of the vision system.
Equipped with lenses of different focal lengths, a scene can be analysed in the wide-angle image for global features (such as the road boundaries) and in more detail in the enlarged image (e.g., for focussing on objects or obstacles further away). The camera pointing capability allows active search and tracking, e.g., for initial self-orientation, motion blur reduction and continuous road tracking while driving. For obvious reasons it is desirable not to lose the road from the camera's field of view when the vehicle changes its heading or enters a tight curve. Especially for obstacle recognition it is essential to have the camera actively center that part of the scene where potential obstacles are of interest (a simple control sketch is given below).
Instrumental to the success of 'VaMoRs' were two key elements: the 4D-approach, as the core of the guidance system, and the BVV 2 for real-time image sequence processing. In the meantime, VaMoRs has been re-equipped with a more powerful transputer network for both image sequence processing and situation assessment as well as vehicle control.
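
The two-axis gaze control described above can be illustrated with a minimal sketch: a proportional controller that drives the pan-tilt platform so that a tracked image feature stays centered. The image size, focal length, gain and sign conventions below are illustrative assumptions, not parameters of the VaMoRs platform.

IMG_W, IMG_H = 512, 512  # image size in pixels (assumed)
FOCAL_PX = 600.0         # focal length expressed in pixels (assumed)
K_P = 0.5                # proportional gain, 1/s (assumed)

def gaze_rates(feature_u, feature_v):
    """Pan/tilt rate commands (rad/s) that recenter a tracked feature."""
    # Pixel offset of the feature from the image center.
    du = feature_u - IMG_W / 2.0
    dv = feature_v - IMG_H / 2.0
    # Small-angle approximation: pixel offset -> angular pointing error.
    pan_error = du / FOCAL_PX
    tilt_error = dv / FOCAL_PX
    # Proportional control drives both errors toward zero, so the road
    # (or a potential obstacle) is kept in the field of view.
    return -K_P * pan_error, -K_P * tilt_error

# Example: a feature off-center yields nonzero rate commands.
print(gaze_rates(400.0, 100.0))
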
3. Perception of the environment 
  
The way the environment is perceived strongly determines the kind of intelligent behavior a robot can exhibit. In this section the potential of optical sensors for this task is shown.
Sensors generally used for solving the navigational task can be divided into two categories [Cox, Wilfong 90]: First, there are 'dead reckoning' sensors, which allow the position of the robot to be estimated by integrating sensor information over time. Dead reckoning is usually performed by odometry and inertial guidance sensors. Odometry is the most common form of sensing available on mobile vehicles equipped with wheels. Using dead reckoning, position errors may grow without bounds unless an independent position fix is used to reduce these errors.
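
A minimal sketch of this error-growth behavior, assuming a simple wheeled vehicle and an illustrative Gaussian noise model (the function names and values are hypothetical, not taken from any particular vehicle):

import math
import random

def integrate_odometry(pose, ds, dtheta, noise=0.01):
    """Advance pose (x, y, theta) by one odometry increment with additive noise."""
    x, y, theta = pose
    ds += random.gauss(0.0, noise * abs(ds))          # wheel-slip error (assumed)
    dtheta += random.gauss(0.0, noise * abs(dtheta))  # heading drift (assumed)
    theta += dtheta
    x += ds * math.cos(theta)
    y += ds * math.sin(theta)
    return (x, y, theta)

pose = (0.0, 0.0, 0.0)
for _ in range(1000):  # drive nominally straight in 0.1 m steps
    pose = integrate_odometry(pose, 0.1, 0.0)
# Without an external position fix, the accumulated error in 'pose'
# keeps growing with the distance travelled.
print(pose)
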
This is where the second category plays its role. External or environmental sensors are able to provide information on the surrounding environment. Among the many sensors and processing schemes that computer vision has to offer, dynamic vision is the one with the most potential for perceiving the environment.
The various techniques being investigated for object detection and tracking can be roughly categorized into a) edge-based using intensity images, b) region-based using intensity images, and c) region-based using color images [Kuan et al. 86], [Turk et al. 87], [Wallace et al. 86]. The approach applied here is of the first type as far as the image processing level is concerned. Though this method might be considered the most susceptible to real-world disturbances such as shadows or ill-defined, ambiguous edges, it has been shown that, in combination with a proper guiding and interpretation mechanism, it is efficient and robust at the same time.
On the feature extraction level, local, oriented edge operators are used both for detection and tracking. Corner-finding operators can be realised by searching for adequate constellations of two edge elements. The edge operators are entirely software based, running on a standard microprocessor (8086 or 80286/8 MHz; T222/20 MHz transputers more recently). They work directly on raw image data.
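
A minimal sketch of such a local, oriented edge operator, implemented here as correlation of a small directed step mask with the raw image inside a search window; the mask construction, scan strategy and threshold are illustrative assumptions, not the operators actually run on the BVV 2 or the transputers.

import numpy as np

def oriented_edge_mask(angle_deg, size=8):
    """Binary step mask: +1 on one side of an oriented line, -1 on the other."""
    ys, xs = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    a = np.deg2rad(angle_deg)
    return np.where(xs * np.cos(a) + ys * np.sin(a) >= 0, 1.0, -1.0)

def find_edge(window, angle_deg, threshold=1000.0):
    """Return the offset of the strongest oriented edge response, or None."""
    mask = oriented_edge_mask(angle_deg)
    mh, mw = mask.shape
    best, best_pos = 0.0, None
    for r in range(window.shape[0] - mh):
        for c in range(window.shape[1] - mw):
            resp = abs(np.sum(mask * window[r:r + mh, c:c + mw]))
            if resp > best:
                best, best_pos = resp, (r, c)
    return best_pos if best > threshold else None

# Example: a vertical intensity step is located by a 0-degree mask.
win = np.zeros((16, 16))
win[:, 8:] = 255.0
print(find_edge(win, angle_deg=0.0))

# A corner can then be hypothesized, as in the text, where two edge
# elements of clearly different orientation occur at nearly the same place.
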