The flight data integration filter reads the flight attitude data either directly from the specified RS-232 interface in the real-time case or from a file in the offline case. With the aid of the flight attitude state, the relation between the WGS84 object coordinate system and the camera coordinate system can be computed for each frame. This task requires additional information, namely the interior orientation of the video camera used as well as a system calibration. The system calibration describes the misalignment angles between the INS reference frame and the camera coordinate system. Details of the two required calibration processes and the implemented direct geo-referencing algorithm can be found in (Eugster and Nebiker, 2007). Because the flight data stream arrives at a lower frequency than the video stream, the required sensor model has to be interpolated in the offline case or predicted in the real-time case. The entire filter can be parameterised by the user via an XML instance; this XML file also defines the current interior orientation parameters and the misalignment angles. The output of this filter is thus a time stamp and a sensor model for each frame.
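The following minimal sketch illustrates the two steps described above: composing the camera attitude from the INS attitude and the boresight misalignment, and interpolating the low-rate flight data to a frame time stamp. It is an illustration only, not the implementation of Eugster and Nebiker (2007); the function names, the Euler angle conventions and the use of scipy are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def camera_attitude(ins_rpy_deg, boresight_rpy_deg):
    """Compose the INS body attitude with the calibrated boresight
    misalignment angles to obtain the camera attitude, i.e.
    R_cam_to_world = R_body_to_world * R_cam_to_body.  Angles are
    roll/pitch/yaw in degrees; the axis convention is an assumption."""
    r_body = Rotation.from_euler("xyz", ins_rpy_deg, degrees=True)
    r_boresight = Rotation.from_euler("xyz", boresight_rpy_deg, degrees=True)
    return r_body * r_boresight

def interpolate_sensor_model(t_frame, t_fd, pos_fd, rpy_fd_deg):
    """Offline case: the flight data samples (t_fd, pos_fd, rpy_fd_deg)
    arrive at a lower rate than the video, so the pose belonging to a
    frame time stamp is interpolated: linearly for the position,
    spherically (slerp) for the attitude."""
    pos = np.array([np.interp(t_frame, t_fd, pos_fd[:, k]) for k in range(3)])
    slerp = Slerp(t_fd, Rotation.from_euler("xyz", rpy_fd_deg, degrees=True))
    return pos, slerp(t_frame)
```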
Figure 6: Filter graph for video processing 
4.2.2 Video imagery integration 
The last step in the video processing chain is the integration of the geo-registered video stream into the i3D virtual globe technology. This integration process has been implemented by means of the i3D filter depicted in Figure 6. This filter encapsulates a fully operational i3D Studio software system. In order to realise the augmented or virtual monitoring video imagery integration, the filter reads each frame's sensor model, which is delivered by the flight data integration filter. In the case of the augmented monitoring integration, the virtual camera of the virtual world, i.e. the observer's view, is controlled by the sensor model encoded in the video stream. Thus, the video frame can be superimposed with the graphics output of the i3D terrain engine. The achievable overlay quality depends on the geo-referencing accuracy and on the accuracy of the rendered virtual world objects. This integration approach allows for the real-time mapping of arbitrary geo-objects: with the aid of the terrain model underlying the virtual globe, an object identified in the video can be picked manually. Based on the measured image coordinates and the known sensor model, the unknown 3D object coordinates are determined by intersecting the object ray with the currently loaded terrain model or with 3D objects present in the 3D scene, as sketched below.
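A minimal sketch of this ray intersection (monoplotting) step is given below. The pinhole model, the coordinate conventions and the terrain_height lookup are assumptions for illustration, not the i3D implementation; r_cam_to_world is a scipy Rotation as in the previous sketch.

```python
import numpy as np

def pixel_ray(x_img, y_img, f, cx, cy, r_cam_to_world):
    """Turn an image measurement into an object-space ray direction
    using the interior orientation (focal length f and principal
    point cx, cy, all in pixels; a simple pinhole model is assumed)."""
    d_cam = np.array([x_img - cx, y_img - cy, -f])  # viewing axis convention assumed
    d = r_cam_to_world.apply(d_cam)
    return d / np.linalg.norm(d)

def intersect_terrain(origin, direction, terrain_height, step=1.0, max_dist=5000.0):
    """March along the viewing ray until it drops below the terrain
    surface, then refine the intersection point by bisection.
    terrain_height(x, y) is a hypothetical lookup into the currently
    loaded terrain model of the virtual globe."""
    prev = origin
    for s in np.arange(step, max_dist, step):
        p = origin + s * direction
        if p[2] <= terrain_height(p[0], p[1]):
            lo, hi = prev, p
            for _ in range(32):  # bisection refinement
                mid = 0.5 * (lo + hi)
                if mid[2] <= terrain_height(mid[0], mid[1]):
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        prev = p
    return None  # no intersection within the loaded terrain
```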
In the virtual monitoring integration, the UAV platform, the video camera and the current view frustum are drawn in the virtual world. These objects, and especially the view frustum, are controlled by the sensor model available for each frame. In parallel to this graphical output, the video stream can be rendered in a separate window. The results of the two video imagery integration approaches are visualised in Figures 3 and 4.
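The frustum geometry can be derived from the same sensor model. The sketch below, again an illustration rather than the i3D implementation, reuses pixel_ray from the monoplotting sketch above and extends the four corner rays to an arbitrary drawing depth.

```python
import numpy as np

def frustum_corners(origin, r_cam_to_world, f, cx, cy, width, height, depth=500.0):
    """Corner points of the view frustum to be drawn in the virtual
    world; depth is purely a visualisation choice.  Uses pixel_ray
    from the monoplotting sketch above."""
    image_corners = [(0, 0), (width, 0), (width, height), (0, height)]
    return [origin + depth * pixel_ray(x, y, f, cx, cy, r_cam_to_world)
            for x, y in image_corners]
```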
In the offline mode, the video imagery processing solution additionally supports features such as play, stop, pause, and skipping forward and backward. Finally, the implemented i3D collaboration framework allows for the real-time synchronisation and sharing of the mapped geo-objects or of the UAV position and attitude information, for example with operations control centres or with other clients in the virtual world (cf. Figure 5).
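The i3D collaboration framework and its wire format are not described in this paper. Purely as an illustration of the idea, the hypothetical sketch below shares the current UAV state as a JSON message over UDP; the message fields and the endpoint are invented for this example.

```python
import json
import socket
import time

def share_uav_state(sock, endpoint, position_wgs84, attitude_rpy_deg):
    """Hypothetical stand-in for the i3D collaboration framework:
    send the current UAV position and attitude so that other clients
    (e.g. an operations control centre) can update their view."""
    message = {
        "type": "uav_state",
        "timestamp": time.time(),
        "position": position_wgs84,    # [lat, lon, height]
        "attitude": attitude_rpy_deg,  # [roll, pitch, yaw] in degrees
    }
    sock.sendto(json.dumps(message).encode("utf-8"), endpoint)

# Usage with an illustrative endpoint:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# share_uav_state(sock, ("239.0.0.1", 5000), [47.53, 7.64, 450.0], [1.2, -0.5, 93.0])
```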
5. APPLICATIONS AND RESULTS 
5.1 Application scenarios 
The presented prototype solution, consisting of a) a mini or micro UAV system, b) the proposed video processing chain and c) a virtual globe technology such as i3D, offers great potential for realising new applications in various areas. The foundation for all applications is the i3D virtual globe technology. The proposed video imagery integration processing chain allows for the real-time or near real-time integration of video into the 3D virtual world. With the aid of the two presented integration strategies, arbitrary geodata content can be extracted from the video stream. The integrated collaboration framework additionally allows the extracted geodata content to be exchanged with other clients of the 3D geoinformation solution. With this architecture, geo-referenced video imagery can be captured and processed immediately at the ground control station of the UAV system. If required, the extracted geospatial data can be distributed in real time, for example to control rooms, where this information can be visualised, further processed and/or stored.
Typical application areas lie in the domains of safety and security. Border patrol, forest fire monitoring, pipeline inspection and traffic surveillance are a few promising examples. Search and rescue applications that support decision making in cases of natural disasters such as earthquakes, forest fires or floods are also promising candidates. Additionally, the augmented monitoring video imagery integration is well suited to realising a virtual piloting solution for controlling and piloting unmanned aerial platforms. In this scenario, the pilot view based on the transmitted video stream can be superimposed, for example, with flight obstacles.
5.2 Achievable geo-registration accuracy 
Figure 7 shows the estimated a priori geo-registration accuracy of a low-cost solution as a function of the image-to-object distance; a simple first-order approximation of this relation is sketched below.
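The error model behind Figure 7 is not reproduced here. As a first-order illustration only, the ground point error can be approximated by combining the platform position error with the attitude error projected over the image-to-object distance; the sigma values below are assumed low-cost sensor figures, not numbers from the paper.

```python
import numpy as np

def a_priori_ground_accuracy(distance_m, sigma_pos_m=10.0, sigma_att_deg=2.0):
    """First-order a priori geo-registration accuracy: the GNSS position
    error combines with the attitude error, whose ground effect grows
    linearly with the image-to-object distance.  Sigmas are illustrative."""
    sigma_att_rad = np.deg2rad(sigma_att_deg)
    return np.hypot(sigma_pos_m, distance_m * sigma_att_rad)

# e.g. a_priori_ground_accuracy(100.0) -> about 10.6 m with the assumed sigmas
```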
It can be seen that the presented direct video geo-referencing
Thank you.