
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B3b. Beijing 2008 
Figure 6. Examples of vehicle detection on motorways (upper image: A96 exit Munich-Blumenau, clipped nadir exposure) and in 
the city (lower image: Munich “Mittlerer Ring”, clipped side-look-left exposure). Rectangles mark automatic vehicle detections; 
triangles point in the direction of travel. 
In the next step, the edges belonging to the roadside markings that still contaminate the vehicle class are eliminated from the histogram. As the roads are well determined by the road extraction, these roadside lines can be found easily. Thus, the algorithm erases all pixels with high edge steepness lying at a roadside position; these pixels are considered to belong mainly to the roadside markings. The algorithm avoids erasing vehicles standing at the roadside by observing the width of the shape: since vehicles are usually broader than roadside lines, this works reliably. Midline markings, which were detected by the roadside identification module based on the dynamic threshold image, are erased as well. This is done in order to reduce false detections, since these midline markings may mimic white cars. Potential vehicle pixels are then grouped by selecting neighbouring pixels; each region is considered to be composed of potential vehicle pixels connected to each other. From the regions obtained, a list of potential vehicles is produced. In order to extract mainly real vehicles from the potential vehicle list, a closing and filling of the regions is performed. This step is shown in Fig. 5. 

Figure 5. Closing the shapes of potential car pixels (left: potential car pixels detected before closing; right: after closing). 
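The grouping and closing steps can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper does not specify the structuring element or connectivity, so a 3×3 neighbourhood (8-connectivity) is assumed throughout.

```python
# Sketch of grouping and closing potential vehicle pixels, assuming an
# 8-connected (3x3) neighbourhood; the paper does not state these details.
# A binary mask is represented as a set of (row, col) tuples.

def neighbours(r, c):
    """All cells in the 3x3 neighbourhood of (r, c), including (r, c)."""
    return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]

def dilate(mask, rows, cols):
    """A cell is set if any cell in its neighbourhood is set."""
    return {(r, c) for r in range(rows) for c in range(cols)
            if any(n in mask for n in neighbours(r, c))}

def erode(mask, rows, cols):
    """A cell survives only if its whole neighbourhood is set and in bounds."""
    return {(r, c) for (r, c) in mask
            if all(0 <= nr < rows and 0 <= nc < cols and (nr, nc) in mask
                   for nr, nc in neighbours(r, c))}

def close_mask(mask, rows, cols):
    """Closing = dilation followed by erosion; fills small holes in shapes."""
    return erode(dilate(mask, rows, cols), rows, cols)

def label_regions(mask):
    """Group mask pixels into 8-connected regions (iterative flood fill)."""
    remaining, regions = set(mask), []
    while remaining:
        stack = [remaining.pop()]
        region = set(stack)
        while stack:
            r, c = stack.pop()
            for n in neighbours(r, c):
                if n in remaining:
                    remaining.remove(n)
                    region.add(n)
                    stack.append(n)
        regions.append(region)
    return regions
```

Running `close_mask` on a ring of edge pixels with a one-pixel hole fills the hole, which is the effect shown in Fig. 5.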
Using the closed shapes, the properties of the vehicle candidates can be described by their direction, area, length, and width. Furthermore, it can be checked whether their alignment follows the road direction, and their position on the road can be considered as well. Based on these observable parameters, we created a geometric vehicle model. The vehicles are assumed to have approximately rectangular shapes with a specific length and width, oriented in the road direction. Since they are expected to be rectangular, their pixel area should be approximately equal to the product of the measured length and width, and vehicles must be located on the roads. We set the minimum expected vehicle length to 5.7 m and the minimum width to 2.6 m. Since these values are minimum constraints on the vehicle geometry, we are able to detect cars and trucks. In case of several detections separated by very small distances, the algorithm assumes that two shapes were detected for the same vehicle and merges the two detections into one vehicle by averaging their positions. Finally, based on this vehicle model, a quality factor for each potential vehicle is computed and the best vehicles are chosen. 
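The geometric model can be sketched as a set of simple checks. The length and width thresholds come from the text; the rectangularity tolerance, the allowed angle deviation, and the merge distance are illustrative assumptions, as the paper does not give these values.

```python
import math

MIN_LENGTH = 5.7       # m, minimum expected vehicle length (from the text)
MIN_WIDTH = 2.6        # m, minimum expected vehicle width (from the text)
RECT_TOLERANCE = 0.3   # assumed: allowed relative deviation of area from l*w
MERGE_DIST = 3.0       # m, assumed: closer detections are treated as one vehicle

def is_vehicle(length, width, area, shape_dir, road_dir, max_angle_diff=15.0):
    """Accept a closed shape if it matches the rectangular vehicle model."""
    if length < MIN_LENGTH or width < MIN_WIDTH:
        return False
    # Rectangular shapes: pixel area should roughly equal length * width.
    if abs(area - length * width) > RECT_TOLERANCE * length * width:
        return False
    # Alignment must follow the road direction (modulo 180 degrees).
    diff = abs(shape_dir - road_dir) % 180.0
    return min(diff, 180.0 - diff) <= max_angle_diff

def merge_close(detections, merge_dist=MERGE_DIST):
    """Merge detections of the same vehicle by averaging their positions."""
    merged = []
    for x, y in detections:
        for i, (mx, my) in enumerate(merged):
            if math.hypot(x - mx, y - my) < merge_dist:
                merged[i] = ((mx + x) / 2.0, (my + y) / 2.0)
                break
        else:
            merged.append((x, y))
    return merged
```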
For traffic monitoring, the camera system runs in a recording mode that we call “burst mode”. In this mode, the camera takes a series of four or five exposures at a frame rate of 3 fps and then pauses for several seconds. During this pause, the plane moves significantly over ground. Then, with an overlap of about 10 % to 20 % with the first exposure “burst”, the second exposure sequence is started. By continuing this periodic alternation between exposure sequences and breaks, we are able to perform area-wide traffic monitoring without producing an overwhelming amount of data. Our strategy for traffic monitoring from the exposures obtained in “burst mode” is to perform vehicle detection only in the first image of an image sequence and then to track the detected vehicles over the next images (fig. 4). 
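The burst timing can be sketched as follows. The paper specifies only the frame rate and the overlap; the image footprint length and aircraft ground speed used here are hypothetical inputs needed to make the computation concrete.

```python
# Sketch of "burst mode" timing: n frames at 3 fps, then a pause sized so
# that the next burst's first exposure overlaps the previous burst's first
# exposure by a chosen fraction. Footprint length and ground speed are
# illustrative assumptions, not values from the paper.

def burst_pause(footprint_m, ground_speed_mps, frames=4, fps=3.0, overlap=0.15):
    """Seconds to pause after a burst so the next burst's first frame
    overlaps the previous burst's first frame by `overlap`."""
    burst_duration = (frames - 1) / fps        # time spanned by one burst
    advance_m = (1.0 - overlap) * footprint_m  # required ground advance
    total_gap = advance_m / ground_speed_mps   # time between first frames
    return max(0.0, total_gap - burst_duration)
```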
3.3 Vehicle Tracking 
With a vehicle detection performed on the first image of each image sequence as described above, we are able to track the detected cars over the whole image sequence. For this, a template matching algorithm based on a normalized cross-correlation operator is applied. For each detected vehicle, a circular template image is cut out of the first image of the image sequence at the position of the detected car. Depending on the vehicle's position in the first image and its direction of travel (obtained from the road database), a search window is created in the second image for each vehicle. Within this search window spanned inside the second image, the created template image is cross-correlated with an area of the same size while it is moved line- and column-wise over the search window. The correlation coefficient is a measure of the probability of a hit. The maximum of the
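The correlation search can be sketched as below. A square template stands in for the circular one described in the text, and images are plain 2D lists of grey values; this is a minimal illustration of normalized cross-correlation, not the authors' implementation.

```python
import math

# Sketch of the tracking step: normalized cross-correlation (NCC) of a
# template with every same-sized patch of a search window, keeping the
# position of the correlation maximum as the hit.

def ncc(template, patch):
    """Normalized cross-correlation coefficient of two equal-sized patches."""
    t = [v for row in template for v in row]
    p = [v for row in patch for v in row]
    mt, mp = sum(t) / len(t), sum(p) / len(p)
    num = sum((a - mt) * (b - mp) for a, b in zip(t, p))
    den = math.sqrt(sum((a - mt) ** 2 for a in t) *
                    sum((b - mp) ** 2 for b in p))
    return num / den if den else 0.0

def best_match(search, template):
    """Slide the template line- and column-wise over the search window;
    return (row, col, score) of the correlation maximum."""
    th, tw = len(template), len(template[0])
    best = (0, 0, -2.0)
    for r in range(len(search) - th + 1):
        for c in range(len(search[0]) - tw + 1):
            patch = [row[c:c + tw] for row in search[r:r + th]]
            score = ncc(template, patch)
            if score > best[2]:
                best = (r, c, score)
    return best
```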