Technical Commission VIII (B8)

  
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B8, 2012 
XXII ISPRS Congress, 25 August – 01 September 2012, Melbourne, Australia 
  
Figure 1. The 3K+ camera system 
The data from the GPS/Inertial system are used for direct 
georeferencing of the images. Upon receiving the pre-processed 
data from the airplane, the mobile ground station processes the 
data and provides them to the end users via web-based portals 
(Kurz, 2011). 
  
[Figure 2 component labels: GPS/IMU, 3K+ camera system, CHICAGO wing pod on DLR glider, microwave datalink, ground station]
Figure 2. Airborne hardware components and data flow of the 
3K camera system for the real time processing chain 
2.2 Onboard processing 
The software running on the onboard computers must be capable of processing the incoming images fast enough that the data received on the ground are still up to date and of use to the rescue forces. Moreover, large data pile-ups caused by a slow onboard processing module can stall the processing system and must be avoided. Such pile-ups are quite likely, because the detection and tracking of vehicles or persons requires high-resolution images in rapid sequence, leading to large amounts of data inside the processing chain.
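The pile-up problem described above is commonly handled with a bounded buffer between image acquisition and processing: when the processor falls behind, the oldest unprocessed frame is discarded so the chain keeps delivering current data. The following sketch illustrates such a drop-oldest policy; the class and its capacity are illustrative assumptions, not part of the 3K system software:

```python
from collections import deque

class DropOldestBuffer:
    """Bounded image buffer: when full, the oldest frame is discarded
    so that downstream processing always works on recent data."""

    def __init__(self, capacity):
        self.frames = deque(maxlen=capacity)
        self.dropped = 0  # number of stale frames discarded so far

    def put(self, frame):
        if len(self.frames) == self.frames.maxlen:
            self.dropped += 1         # deque with maxlen drops from the left
        self.frames.append(frame)

    def get(self):
        return self.frames.popleft() if self.frames else None

# Acquisition outpaces processing: 5 frames arrive, capacity is 3.
buf = DropOldestBuffer(capacity=3)
for i in range(5):
    buf.put(f"image_{i}")
first = buf.get()
print(first, buf.dropped)  # image_2 2 -> the two oldest frames were dropped
```

Dropping old frames trades completeness for timeliness, which matches the requirement that the data reaching the rescue forces stay up to date.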
Therefore, each camera has one dedicated computer for 
processing its images. Before the actual detection of humans or 
vehicles starts, each image is pre-processed in two major steps. 
Firstly, after the image is downloaded from the camera, the IGI 
system sends an event record with the exact time stamp, location, 
and orientation at which the image was taken to the 
computer. The synchronization is done with the help of the 
camera's external flash connector. Secondly, georeferencing 
and orthorectification take place. The interior and exterior 
camera parameters, determined by in-flight calibration (Kurz, 
2012), and an SRTM DEM are loaded before take-off. After 
determining the image bounding box, the processor calculates 
the intersection of each image ray with the terrain on the 
graphics processing unit (GPU) rather than on the host's CPU. 
The program works with NVIDIA's CUDA software library 
and uses its special memory areas to accelerate the 
orthorectification. As each pixel can be orthorectified 
independently, this calculation is well suited to GPU 
architectures. Only by leveraging the image-processing 
capabilities of the video card's GPU is it possible to provide 
high-resolution orthorectified images to the thematic processors 
on time. 
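Because each output pixel depends only on its own ray/terrain intersection, the whole grid can be computed as one data-parallel operation, which is what makes orthorectification a good fit for GPU architectures. The sketch below mimics that structure on the CPU with NumPy, replacing the SRTM DEM with a flat ground plane and using a simplified nadir geometry; the function name and camera model are illustrative assumptions, not the 3K code:

```python
import numpy as np

def intersect_rays_with_plane(cam_pos, directions, ground_height=0.0):
    """Intersect per-pixel viewing rays with the horizontal plane
    Z = ground_height and return (H, W, 2) ground coordinates (X, Y).

    cam_pos:    (3,) projection centre in a local metric frame
    directions: (H, W, 3) viewing direction of every pixel
    Each pixel depends only on its own ray, so the whole grid is one
    data-parallel (and therefore GPU-friendly) array expression.
    """
    t = (ground_height - cam_pos[2]) / directions[..., 2]  # ray parameter
    return cam_pos[:2] + t[..., None] * directions[..., :2]

# Toy example: a 2x2 image looking straight down from 1000 m AGL.
cam = np.array([0.0, 0.0, 1000.0])
rays = np.zeros((2, 2, 3))
rays[..., 2] = -1.0                        # nadir-looking rays
ground = intersect_rays_with_plane(cam, rays)
print(ground[0, 0])                        # [0. 0.] -> directly below the camera
```

On a GPU the same computation maps naturally onto one thread per pixel, since no pixel reads another pixel's result.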
One of the thematic processors fully automatically extracts road 
traffic data from the orthorectified images during the course of a 
flight. This processing module consists of a vehicle detector and 
a tracking algorithm. Images for traffic processing are acquired 
in a so-called burst mode: brief sequences of a few images 
(3-5 per burst) with a high repetition rate (up to 3 fps). A burst 
is triggered every 5-7 seconds, depending on flight height and 
speed, so that there is almost no overlap between images of 
different bursts. This significantly reduces the amount of image 
data produced in comparison to continuous recording at a high 
frame rate. With this technique we are able to perform automatic 
traffic data extraction in real time. Road axes from a Navteq 
road database are overlaid on the first image of each burst, and 
vehicles are detected along these roads. Vehicle detection is 
done by the machine learning algorithms AdaBoost and support 
vector machine, which were trained intensively on the detection 
of cars offline, prior to the flight (Leitloff, 2010). Vehicle 
tracking is performed between consecutive image pairs within a 
burst, based on the vehicle detections in the first image: for 
each detected vehicle a template is produced from the first burst 
image, and these templates are searched for in the subsequent 
images by template matching (Rosenbaum, 2010). 
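The template-matching step can be illustrated with normalized cross-correlation: a patch cut around a detected vehicle in the first burst image is slid over the next image, and the position with the highest correlation is taken as the vehicle's new location. The following NumPy sketch is a generic NCC matcher, not the matcher of Rosenbaum (2010); the function name, image sizes, and brute-force search are illustrative:

```python
import numpy as np

def ncc_match(template, search):
    """Return (row, col) of the window in `search` that best matches
    `template` under normalized cross-correlation (2-D grayscale)."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(search.shape[0] - th + 1):
        for c in range(search.shape[1] - tw + 1):
            w = search[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = tn * np.sqrt((wz ** 2).sum())
            score = (t * wz).sum() / denom if denom > 0 else -1.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Toy test: a bright 2x2 "vehicle" shifted by (1, 2) pixels between frames.
img = np.zeros((8, 8)); img[2:4, 3:5] = 1.0    # vehicle in first burst image
tmpl = img[1:5, 2:6].copy()                    # template around the detection
nxt = np.zeros((8, 8)); nxt[3:5, 5:7] = 1.0    # vehicle moved by (1, 2) px
print(ncc_match(tmpl, nxt))                    # (2, 4): template shifted by (1, 2)
```

In practice the search would be restricted to a small window around the detection, since the short time between burst images bounds how far a vehicle can move.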
3. SYSTEM PERFORMANCE 
In the following, the quality and performance of the onboard 
processing chain are evaluated. First the quality of the 
produced data is discussed, then the real-time performance 
of the system. 
3.1 Quality of Service 
Products like ortho mosaics and traffic parameters should be 
generated with sufficient geometric accuracy; an absolute 
horizontal position accuracy of 3 m is assumed to be sufficient, 
in particular for import into GIS or road databases. Table 1 
lists the horizontal and vertical georeferencing accuracy 
separately for the post-processing and the real-time case. For the 
latter, the images are orthorectified based only on GPS/Inertial 
system data and the global 25m-resolution SRTM DEM. 

        Post processing: bundle adjustment      Direct georeferencing
        theor. RMSE*      empir. RMSE
  X     0.083 m           0.138 m               < 3 m
  Y     0.078 m           0.365 m               < 3 m
  Z     0.400 m           0.452 m               n.a.

  * Without DEM error, assuming GPS position error < 0.1 m, angle error of the
    inertial system < 0.05°, flight height 1000 m AGL

Table 1. Georeferencing accuracy of the 3K+ camera system given a bundle 
adjustment (left) and based only on GPS/Inertial system measurements (right). 
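The direct-georeferencing bound can be sanity-checked with a back-of-the-envelope calculation: for near-nadir viewing, an attitude error δ shifts the ray/ground intersection by roughly tan(δ) · h. With the 0.05° angle error and 1000 m AGL flight height stated in the table footnote, this is under 1 m, which together with the < 0.1 m GPS bound stays comfortably inside the 3 m budget (DEM error excluded, as in the table):

```python
import math

flight_height = 1000.0             # m above ground level (Table 1 footnote)
angle_error = math.radians(0.05)   # inertial-system attitude error bound
gps_error = 0.1                    # m, GPS position error bound

# A small attitude error tilts the image ray, displacing its ground
# intersection by about tan(error) * height for near-nadir views.
shift = math.tan(angle_error) * flight_height
print(round(shift, 2), round(shift + gps_error, 2))  # 0.87 0.97 (metres)
```

The remaining gap to the observed < 3 m error is plausibly dominated by the DEM error, which the footnote explicitly excludes.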
3.2 Real-Time Performance 