Full text: Proceedings, XXth congress (Part 1)
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXV, Part B1. Istanbul 2004
navigation data values (height over ground, attitude of the aircraft). The
vehicle recognition of LUMOS and "Eye in the Sky" works on 
single images. Approaches based on difference images or 
estimated background images do not work reliably for test 
flights with airplanes due to their fast speed over ground. 
The vehicles have a variety of appearances in the captured 
images depending on sensor type, object properties and
environmental conditions (e.g. weather, temperature). But most 
of the traffic objects can be recognized as coarse rectangular 
shapes which contrast more or less with the background. 
Therefore the algorithm searches for characteristic contours (of 
suitable sizes) in edge images. 
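This kind of contour search can be sketched in a few lines of NumPy; the edge operator, the 4-connected labelling, and the pixel size bounds below are illustrative assumptions, since the paper does not specify the actual operators or thresholds:

```python
import numpy as np

def edge_magnitude(img):
    """Approximate gradient magnitude via central differences."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.hypot(gx, gy)

def vehicle_candidates(img, edge_thresh=50.0, min_size=4, max_size=20):
    """Label connected edge regions (4-connected flood fill) and keep
    those whose bounding box fits a plausible vehicle size in pixels."""
    edges = edge_magnitude(img) > edge_thresh
    labels = np.zeros(img.shape, dtype=int)
    boxes, current = [], 0
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            if edges[r, c] and labels[r, c] == 0:
                current += 1
                stack, rows, cols = [(r, c)], [], []
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < img.shape[0] and 0 <= x < img.shape[1]
                            and edges[y, x] and labels[y, x] == 0):
                        labels[y, x] = current
                        rows.append(y)
                        cols.append(x)
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
                h = max(rows) - min(rows) + 1
                w = max(cols) - min(cols) + 1
                if min_size <= max(h, w) <= max_size:
                    boxes.append((min(rows), min(cols), h, w))
    return boxes
```

A real implementation would additionally test the further vehicle cues described in the following paragraph rather than accepting candidates on size alone.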
If a higher pixel resolution is available (visible camera), 
further properties of vehicles such as the existence of special 
cross edges can be included in the search process. Pixel values 
themselves from the original images give additional 
information for consolidation or rejection of vehicle hypotheses 
or indications of the probable driving direction (Figure 7). 
Evaluating the number of vehicles per scene gives a measure 
for traffic density that can be provided to a central processing 
computer. 
Figure 7. Vehicle hypotheses of different size classes 
High frame rates allow the determination of velocities. The
frame-based information from successive images is now
combined to determine vehicle velocities.
Virtual car positions are obtained from real car position data 
from one image and navigation data from the following image. 
Velocity vectors can be extracted by comparison of these 
virtual car positions and real position data from the second 
image. The repeated recognition of a car in the following image
confirms the correctness of the car hypothesis.
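The virtual-position idea can be sketched as follows. The navigation fields (`position`, `heading`, `gsd` for ground sampling distance) and the flat-ground projection are simplifying assumptions, since the paper does not specify the georeferencing model:

```python
import math

def pixel_to_ground(px, nav):
    """Project an image pixel to ground (east, north) in metres using
    per-image navigation data; the field names are illustrative."""
    c, s = math.cos(nav["heading"]), math.sin(nav["heading"])
    x, y = px[0] * nav["gsd"], px[1] * nav["gsd"]
    return (nav["position"][0] + c * x - s * y,
            nav["position"][1] + s * x + c * y)

def velocity_from_pair(px1, nav1, px2, nav2, dt):
    """The virtual position is the ground position of the car taken
    from image 1: a stationary car would be re-detected exactly there
    in image 2.  The residual between the real detection in image 2
    and the virtual position, divided by dt, is the velocity vector."""
    ve, vn = pixel_to_ground(px1, nav1)
    re, rn = pixel_to_ground(px2, nav2)
    v = ((re - ve) / dt, (rn - vn) / dt)   # m/s, east/north components
    return v, math.hypot(*v) * 3.6         # speed in km/h
```

In this sketch a car detected 5 m further along the road than its virtual position, at a 0.2 s frame interval, yields 25 m/s (90 km/h).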
Assuming a time difference of 1/5 s between two images and a 
pixel resolution of 0.5 m, velocities of 9 km/h can be detected. 
On the other hand, a small car moving with 80 km/h does not 
change its position from image to image by a value of its 
length. 
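Both figures follow directly from the stated frame interval and pixel size:

```python
DT = 1 / 5        # s between successive images
GSD = 0.5         # m pixel resolution

v_min = GSD / DT * 3.6      # a one-pixel shift between images, in km/h
shift_80 = 80 / 3.6 * DT    # ground displacement of an 80 km/h car, in m
```

A one-pixel displacement corresponds to 9 km/h, and an 80 km/h car moves about 4.4 m between images, i.e. just under the length of a small car.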
The vehicle recognition algorithm delivers a coarse size 
estimation so that accepted car hypotheses can be divided into 
a number of length classes. Using three classes has proven to 
be very practical; a larger number reduces the accuracy of the
classification. Thus cars, vans and long vehicles can be
distinguished.
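Such a three-class division might look like the sketch below; the boundary values are illustrative assumptions, since the paper only states that three classes proved practical:

```python
def length_class(length_m, bounds=(5.0, 9.0)):
    """Map an estimated vehicle length to one of three coarse classes.
    The boundary values (5 m, 9 m) are illustrative assumptions."""
    if length_m < bounds[0]:
        return "car"
    if length_m < bounds[1]:
        return "van"
    return "long vehicle"
```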
The traffic data extraction within the airborne traffic 
monitoring projects is done per image and road segment first. 
Densities and/or velocities are calculated for each vehicle class 
from the obtained vehicle numbers and positions. The 
extracted data for single images are combined for completely 
observed road segments using size and position of the 
overflown streets. The calculated average velocities and 
densities per road segment of the digital map and per 
timestamp can now be used as input data for simulation and
prognosis tools. 
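The per-segment aggregation described above can be sketched as follows; the record layout (timestamp, segment id, vehicle class, speed) and field names are illustrative assumptions, not the projects' actual data format:

```python
from collections import defaultdict

def aggregate(detections, segment_length_km):
    """Average speed and vehicle density per (timestamp, segment,
    class) from per-image detections on a completely observed
    road segment."""
    groups = defaultdict(list)
    for t, seg, cls, speed in detections:
        groups[(t, seg, cls)].append(speed)
    return {
        key: {
            "mean_speed_kmh": sum(v) / len(v),
            "density_veh_per_km": len(v) / segment_length_km,
        }
        for key, v in groups.items()
    }
```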
5. VALIDATION OF THE SYSTEM 
In recent years, several test flight campaigns within the
projects LUMOS and “Eye in the Sky” took place to validate
the quality and reliability of the system, especially of the
real-time image processing part. The applied sensors and
auxiliary equipment can be integrated within two hours. 
Different scenarios were flown (hovering vs. moving, visible 
camera vs. infrared camera, different flight heights, different 
illumination conditions). 
The evaluation of georeferencing quality using
photogrammetric methods for calibration flights gives 
accuracies in the range of one meter which is sufficient for the 
requested applications. 
Figure 9 compares automatic vehicle identification by
LUMOS with manual counts in the images. The image sequence,
with a rate of 12.5 frames/s, was captured on May 6, 2003 over
the Berlin City Highway from a flight altitude of 600 m
(Figure 8). Only images showing at least 60 % of a road
segment of 90 m length were taken into account.
Figure 8. LUMOS-image of the Berlin City Highway 
southbound (left part of the figure) 
Figure 9. Vehicle counts per image: automatic (LUMOS) vs. manual
On average, the number of automatically counted vehicles is
about 11 % below the manually generated value. From the
averaged detection rates and the length of the observed road 
section, vehicle number densities of 59.3 vehicles/km from 
manual counts and of 52.7 vehicles/km from automatic counts 
are obtained. 
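The conversion from counts to densities is simply the average per-image vehicle count divided by the section length; the counts used below are back-computed from the quoted densities for illustration:

```python
def density_per_km(mean_vehicle_count, segment_length_m):
    """Vehicle number density from the average per-image count on a
    completely observed road section."""
    return mean_vehicle_count / (segment_length_m / 1000.0)
```

With the 90 m section, the quoted 59.3 and 52.7 vehicles/km correspond to average per-image counts of roughly 5.3 (manual) and 4.7 (automatic) vehicles.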
To verify the quality of the algorithm for velocity
determination, a special DLR test car was overflown during
several test flights while driving. The velocity measured on board of the test