…on the graphics card rather than on the host's CPU. The CUDA software library is used to accelerate the processing, and images can be orthorectified quickly because these operations are well-suited for GPU execution. With the image-processing chain accelerated in this way, it is possible to provide orthorectified images to the thematic processors, which extract fully automatically road traffic data during the course of a flight.

The traffic processor consists of a vehicle detector and a vehicle tracker. The imagery acquired for traffic processing consists of brief image sequences (bursts) taken with a high repetition rate. A burst is triggered, depending on the aircraft's speed and height above ground, such that there is nearly no overlap between consecutive bursts. This reduces the amount of image data in comparison to a continuous acquisition significantly. With this acquisition scheme, automatic traffic data extraction proceeds as follows: onto the first image of the burst, road axes are overlaid, and vehicles are detected along them. The detection is done by a combination of AdaBoost and support vector machine classifiers trained exclusively on the detection of vehicles (Leitloff et al., 2010). Vehicle tracking is performed on image pairs within an image burst, starting from the vehicles detected in the first image. In the tracking step, a template is produced for each detected vehicle and searched for in the consecutive images of the burst (Rosenbaum et al., 2010).
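As an illustration of this tracking step, the following minimal Python sketch matches a single detection between two images of a burst by normalized cross-correlation template matching. It is a simplified stand-in for the published method, not the onboard implementation; the function name and the patch and search-window sizes are illustrative assumptions, and the detection is assumed to lie away from the image border.

```python
import numpy as np

def track_by_template(img0, img1, det_rc, patch=15, search=20):
    """Track one vehicle detection from the first burst image (img0)
    into the consecutive one (img1) by normalized cross-correlation.
    det_rc: (row, col) of the detection; patch/search: illustrative
    template size and search-window half-size in pixels."""
    r, c = det_rc
    h = patch // 2
    tpl = img0[r - h:r + h + 1, c - h:c + h + 1].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-9)   # zero-mean, unit-variance template

    best_score, best_rc = -np.inf, det_rc
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            # skip candidate windows that would leave the image
            if (rr - h < 0 or cc - h < 0 or
                    rr + h + 1 > img1.shape[0] or cc + h + 1 > img1.shape[1]):
                continue
            win = img1[rr - h:rr + h + 1, cc - h:cc + h + 1].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-9)
            score = float((tpl * win).mean())        # correlation coefficient
            if score > best_score:
                best_score, best_rc = score, (rr, cc)
    return best_rc   # estimated vehicle position in the consecutive image
```

The displacement between det_rc and the returned position, divided by the time between the two burst images, yields the vehicle velocity visualized in Figure 2.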
3. PERFORMANCE

To judge the performance of the onboard system, first the quality of the georeferencing and then the real-time performance are evaluated.

3.1 Georeferencing accuracy

The georeferencing of the derived traffic parameters should be of high accuracy; 3 m absolute accuracy is assumed as sufficient in combination with road databases. Table 1 lists the direct georeferencing accuracy for the real-time case. It is based only on GPS/Inertial measurements and the SRTM DEM.
Direct georeferencing:  RMSE < 3 m / < 3 m / n.a.
(position error < 0.1 m, angle error of …, flight height 1000 m AGL)

Table 1. Direct georeferencing accuracy of the 3K+ camera system (left). It is only based on GPS/Inertial measurements (right).
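To make the entries of Table 1 concrete, the following sketch shows the core operation of direct georeferencing: a pixel's viewing ray is rotated by the GPS/Inertial attitude and intersected with the terrain height, e.g. taken from the SRTM DEM. This is a simplified flat-terrain version with illustrative names, not the onboard code; it also shows why an attitude error of δ radians displaces a nadir ground point by roughly h·δ.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Body-to-ground rotation built from roll, pitch, yaw (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def georeference_pixel(cam_pos, roll, pitch, yaw, x_img, y_img, f, z_terrain):
    """Intersect the viewing ray of image coordinates (x_img, y_img)
    with the horizontal plane z = z_terrain (local SRTM height).
    cam_pos: (E, N, height) of the projection centre from GPS/INS;
    f: focal length in the same units as x_img, y_img."""
    ray = rotation_matrix(roll, pitch, yaw) @ np.array([x_img, y_img, -f])
    s = (z_terrain - cam_pos[2]) / ray[2]   # scale factor down to the terrain
    return cam_pos + s * ray                # ground point (E, N, z_terrain)

# Nadir pixel at 1000 m AGL: a 1 mrad pitch error displaces the
# ground point by about 1000 m * 0.001 = 1 m.
print(georeference_pixel(np.array([0.0, 0.0, 1000.0]),
                         0.0, 0.001, 0.0, 0.0, 0.0, 0.05, 0.0))
```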
Figure 2 shows results of the vehicle detection and tracking algorithm. Detected and tracked vehicles are marked by arrows showing the direction of travel, with the arrow color representing the vehicle velocity. The correctness of the traffic data obtained from that scene was 95 %, the completeness was 85 %. This results in a total quality of 81 %, which is defined as
$$\mathrm{quality} = \frac{\mathrm{true\ positives}}{\mathrm{true\ positives} + \mathrm{false\ positives} + \mathrm{false\ negatives}}$$
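The 81 % follow directly from the stated correctness c = TP/(TP+FP) = 0.95 and completeness r = TP/(TP+FN) = 0.85, since the quality can be rewritten in terms of both:

```latex
\mathrm{quality}
  = \frac{TP}{TP + FP + FN}
  = \frac{1}{1 + FP/TP + FN/TP}
  = \frac{1}{1/c + 1/r - 1}
  = \frac{1}{1/0.95 + 1/0.85 - 1}
  \approx 0.81
```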
  
Figure 2. 3K+ scene obtained on 17 September 2011 close to Cologne/Germany at a flight height of 1500 m AGL.
3.2 Real-time performance 
During the first hours after arriving in an affected area, rescue forces often need only distances or spatial dimensions of
buildings or bridges to start working, so up-to-date 
orthorectified images might be all they need in the beginning. 
Therefore, it is interesting to know how fast the onboard system 
is able to provide rescue forces with these images. 
The image acquisition and the synchronization with the IGI 
system hardly take any time compared to the succeeding 
onboard processing modules and are neglected in the following. 
As stated earlier, the 3K+ system can be installed either across or along the flight track. This test uses the across-track setup (because of the larger coverage) in order to show orthorectified land area as a function of processing time. The coverage mainly depends on the flight height and the resulting GSD. Table 2 lists typical flight heights and the resulting swath widths. The 3K+ system can cover a swath of 1280 m at 500 m AGL and can orthorectify 20 km² in 3.5 minutes (Fig. 3) with a GSD of 6.5 cm.
  
3K+ camera system
Viewing directions            1× nadir, 2× ±32° / variable
FOV                           ±52° across track
Coverage / GSD @ 500 m AGL    1280 m × 240 m / 6.5 cm nadir
Coverage / GSD @ 1000 m AGL   2560 m × 480 m / 13 cm nadir
Coverage / GSD @ 3000 m AGL   7680 m × 1440 m / 39 cm nadir

Table 2. Coverage and GSD of the 3K+ camera system
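The entries of Table 2 follow from the viewing geometry: with a field of view of ±52° the across-track swath is 2h · tan 52°, and the nadir GSD scales linearly with flight height from the 6.5 cm measured at 500 m. A short sketch (the function name is illustrative) reproduces the table values:

```python
import math

def swath_and_gsd(height_agl_m):
    """Across-track swath and nadir GSD of the 3K+ setup as a function
    of flight height, derived from the +/-52 deg FOV and the 6.5 cm
    nadir GSD at 500 m AGL given in Table 2."""
    swath_m = 2.0 * height_agl_m * math.tan(math.radians(52.0))
    gsd_cm = 6.5 * height_agl_m / 500.0   # GSD grows linearly with height
    return swath_m, gsd_cm

for h in (500, 1000, 3000):
    s, g = swath_and_gsd(h)
    print(f"{h:5d} m AGL: swath {s:6.0f} m, nadir GSD {g:4.1f} cm")
# -> 1280 m / 6.5 cm, 2560 m / 13.0 cm, 7680 m / 39.0 cm, matching Table 2
```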
  
Figure 3. Coverage of onboard processed orthophotos (in km²) as a function of processing time (in seconds) during a flight, for flight heights of 500 m, 1000 m, and 3000 m AGL.
If the operators at the ground station are more interested in a larger overview of the scene, the system can cover a swath of almost 8 km if it climbs to 3000 m AGL. Images with 39 cm resolution covering almost 140 km² can be sent to the ground after the same processing time of 3.5 minutes (210 s in Fig. 3). An important result is the almost linear growth of the coverage at all considered flight levels. A logarithmic growth would mean that the image processing cannot keep up with the cruising speed of the airplane, which is typically 136 knots due to the shutter speed of the cameras. In addition, there are almost always longer pauses between single flight strips caused by heading for other areas or by limitations imposed by flight control. An example of in-flight generated images is shown in Figure 4. In this case, the images cover an area of 35 km².
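The linear curves in Figure 3 are consistent with a constant area rate given by swath width times ground speed. A back-of-the-envelope check at the stated 136 knots (the helper function is illustrative):

```python
KNOTS_TO_MS = 0.5144   # knots to metres per second

def coverage_km2(seconds, swath_m, speed_kn=136):
    """Area covered after a given time, assuming the onboard processing
    keeps up with a constant cruising speed (area rate = swath * speed)."""
    return swath_m * speed_kn * KNOTS_TO_MS * seconds / 1e6

print(coverage_km2(210, 1280))   # ~18.8 km^2 at 500 m AGL (~20 km^2 in 3.5 min above)
print(coverage_km2(210, 7680))   # ~113 km^2 at 3000 m AGL; Fig. 3 shows ~140 km^2,
                                 # so the actual ground speed was presumably higher
```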
A good trade-off between coverage and resolution is a flight height between 1000 m and 1500 m AGL, because in this case the GSD is small enough to obtain good results in subsequent object detection algorithms such as the traffic processing module. The scene already shown in Figure 2 was processed as part of a larger image. When flying in traffic detection mode, the time between the bursts is used by the traffic processor to process the last burst. With the current version (described in Section 2.2) it is possible to complete the vehicle detection and tracking before the next image burst is taken. After compressing the results, the system sends them directly to the ground with an average data rate of 7 Mbit/s, which is high enough to transmit all processed data. These results show that the whole onboard processing system is able to operate in real time.
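The real-time claim can also be phrased as a simple timing budget: since the bursts are triggered with nearly no overlap, the interval between two bursts equals the along-track footprint divided by the ground speed, and the traffic processor must finish within that interval. In the sketch below, the 480 m footprint is the along-track coverage at 1000 m AGL from Table 2, while the 5 s processing time is purely an illustrative assumption.

```python
def real_time_ok(footprint_along_m, speed_ms, processing_s):
    """Check whether per-burst traffic processing fits into the interval
    between two consecutive, non-overlapping image bursts."""
    interval_s = footprint_along_m / speed_ms
    return processing_s <= interval_s, interval_s

# 480 m along-track footprint (1000 m AGL, Table 2) at ~70 m/s (136 kn);
# the 5 s processing time is an assumed, not a measured, value.
ok, interval = real_time_ok(480.0, 70.0, 5.0)
print(f"inter-burst interval: {interval:.1f} s, real-time capable: {ok}")
```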
 
	        