
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B1. Beijing 2008
Integrated multi-sensor systems are increasingly used to provide cost-effective and robust navigation solutions. Recently, efforts have been made to improve GPS/INS navigation with visual aiding. The horizon line can be detected by an onboard camera (Winkler et al., 2004) to provide the pitch and roll angles of a Micro Air Vehicle (MAV). A sequence of stereo imagery can be processed to determine the platform trajectory, bridging sections of the trajectory that are poorly determined by GPS/INS (Tao et al., 2001).
An integrated GPS/INS/Vision navigation system for UAVs is investigated in this paper. A vision system based on a CCD video camera and a laser range finder (LRF) is used to sense the environment and observe relative vertical and horizontal movements over the ground. The system modelling strategies and the data fusion procedure for this sensor integration scenario are investigated. Numerical analysis is included to show the potential performance of the proposed triple integration.
2. VISION AIDED MOVEMENT ESTIMATION 
A wide range of vision sensors is available to meet the requirements of this particular application, providing flexible options for enhancing the integrated system. The study of visual motion analysis consists of two basic issues: one is to determine optical flow and/or feature correspondences from image sequences, and the other is to estimate motion parameters from them. Huang and Netravali (1994) reviewed algorithms for the estimation of motion/structure parameters from image sequences in the computer vision context. In order to optimally integrate a vision component into a GPS/INS navigation system, the vision navigation performance should be investigated first.
Figure 1. Vision-based navigation flow chart
There are several error sources in this vision-based model. The height from the LRF may contain a small fixed offset (up to 10 cm) and a small scale error (<1%). The optic flow has scale errors, and the gyro rates have bias and drift. Other errors include the initial attitude error and the ground slope. The major error sources can be estimated using the GPS measurements, as discussed below.
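To make this concrete, the following is a minimal sketch, not the authors' implementation, of how such sensor errors could be estimated with a linear Kalman filter that uses GPS velocity as the external reference, based on the vision velocity relation of Equation (1) below. The choice of error states (gyro bias, optic-flow scale error, LRF height offset), the random-walk dynamics, and all noise values are illustrative assumptions.

```python
import numpy as np

# Hypothetical error-state filter for one horizontal axis:
# x = [gyro bias b (rad/s), optic-flow scale error s (unitless),
#      LRF height offset dr (m)].
# Linearizing v_vis ~= (Omega - omega) * r gives the residual
# y = v_vis - v_gps ~= -r*b + r*Omega*s + (Omega - omega)*dr.

class ErrorKF:
    def __init__(self):
        self.x = np.zeros(3)                  # error-state estimate
        self.P = np.diag([1e-4, 1e-2, 1e-2])  # initial covariance (assumed)
        self.Q = np.diag([1e-10, 1e-8, 1e-8]) # random-walk process noise
        self.R = 0.05 ** 2                    # GPS velocity noise, (m/s)^2

    def predict(self):
        # Errors modelled as random walks, so F = I.
        self.P += self.Q

    def update(self, v_vis, v_gps, Omega, omega, r):
        H = np.array([-r, r * Omega, Omega - omega])  # residual Jacobian
        y = (v_vis - v_gps) - H @ self.x              # innovation
        S = H @ self.P @ H + self.R
        K = self.P @ H / S                            # Kalman gain
        self.x += K * y
        self.P -= np.outer(K, H @ self.P)
```

At each GPS epoch, `predict` and `update` are called with the current optic flow rate `Omega`, gyro rate `omega`, and LRF range `r`, so the observability of each error state follows the flight dynamics encoded in `H`.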
3. INTEGRATED GPS/INS/VISION NAVIGATION 
The integrated GPS/INS/vision navigation system flow chart is 
shown in Figure 2. Two Kalman filters (KF) are employed in 
the system. 
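The excerpt does not detail how responsibilities are split between the two filters; the sketch below is only one plausible arrangement, a fast vision/INS filter at the 50 Hz optic-flow rate cascaded with a slower GPS correction stage. The class names, the 1 Hz GPS rate, the fixed-gain correction stand-in, and all numbers are assumptions for illustration.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class FastVisionInsKF:
    v: np.ndarray = field(default_factory=lambda: np.zeros(2))  # velocity

    def step(self, flow_xy, gyro_xy, height):
        # Vision velocity per Eq. (1); a real filter would also
        # propagate an INS mechanization and weight by covariance.
        self.v = (flow_xy - gyro_xy) * height

@dataclass
class SlowGpsKF:
    bias: np.ndarray = field(default_factory=lambda: np.zeros(2))

    def correct(self, v_vision, v_gps, gain=0.1):
        # Crude fixed-gain stand-in for a KF measurement update:
        # learn the slowly varying velocity error from GPS.
        self.bias += gain * ((v_vision - v_gps) - self.bias)

fast, slow = FastVisionInsKF(), SlowGpsKF()
for k in range(50):                        # one second at 50 Hz
    fast.step(np.array([0.30, 0.05]), np.array([0.01, 0.00]), 10.0)
    if k % 50 == 0:                        # GPS assumed at 1 Hz
        slow.correct(fast.v, v_gps=np.array([2.85, 0.45]))
print(fast.v - slow.bias)                  # corrected velocity estimate
```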
The image sequences taken from the UAV can be used as a separate set of self-contained spatial measurements. Given that close objects exhibit a higher angular motion in the visual field than distant objects, optic flow can be used to calculate the range to stationary objects in the field of view, or the true velocity of objects with known ranges. In this project, optic flow is calculated on the UAV helicopter in real time at 50 Hz using an image interpolation algorithm (Srinivasan, 1994), which is robust in natural outdoor environments and delivers visual motion in the form of angular rates.
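As a rough illustration of the interpolation idea, not the onboard implementation, the following 1-D sketch estimates a sub-pixel image shift by least-squares fitting the current frame to an interpolation between shifted copies of the reference frame; dividing the pixel shift by the frame interval and the focal length in pixels would give an angular rate. The reference shift size and the synthetic test scene are illustrative assumptions.

```python
import numpy as np

def interp_shift_1d(f0, f1, k=1):
    """Estimate the sub-pixel shift s with f1[x] ~= f0[x - s] by
    fitting f1 - f0 to an interpolation between copies of the
    reference frame f0 shifted by +/- k pixels (1-D sketch)."""
    f_plus = np.roll(f0, k)      # reference shifted right by k pixels
    f_minus = np.roll(f0, -k)    # reference shifted left by k pixels
    g = (f_plus - f_minus) / (2.0 * k)   # interpolation basis, ~ -df0/dx
    w = slice(k, len(f0) - k)            # drop wrapped-around edge samples
    return np.sum((f1[w] - f0[w]) * g[w]) / np.sum(g[w] * g[w])

# Synthetic check: a smooth periodic scene shifted right by 0.3 pixels.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
dx = x[1] - x[0]
scene = lambda u: np.sin(3.0 * u) + 0.5 * np.cos(7.0 * u)
f0, f1 = scene(x), scene(x - 0.3 * dx)   # f1 is f0 shifted by 0.3 px
print(interp_shift_1d(f0, f1))           # prints ~0.3
```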
Two steps are needed to determine translation velocities from the optic-flow-derived angular rates. Firstly, the effects of rotation are separated from those of translation by subtracting the known rotation rates, measured by the onboard rate gyroscopes, from the optic flow rates. Secondly, the image motion rate is multiplied by the range above the ground estimated by the LRF to obtain the mean-centred measurement of both lateral and longitudinal velocities (Garratt and Chahl, 2003). The vertical velocity relative to the ground can be calculated from the LRF measurement. As all the sensors have measurement errors, the key issue here is to model and estimate the errors and extract the navigation information from the vision and INS data streams.

Therefore, the UAV horizontal velocity in the body frame can be calculated from the optical flow, LRF range and gyro angular rates with the following formula:
$$v_{b,xy} = (\Omega_{xy} - \dot{\varphi}_{xy}) \, r_{gz} \qquad (1)$$

where $v_{b,xy}$ are the horizontal translation velocities; $\Omega_{xy}$ is the optical flow measurement of angular rate; $\dot{\varphi}_{xy}$ are the rotation rates about the two horizontal axes; and $r_{gz}$ is the LRF measurement of the relative height from the ground. The integration flow chart is shown in Figure 1.
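A minimal numerical sketch of Equation (1) follows, assuming the optic-flow and gyro rates are already expressed about aligned body axes and the ground is locally level; the vertical velocity is differenced from successive LRF ranges. The sensor values and the 50 Hz epoch interval are illustrative.

```python
import numpy as np

def body_velocity(flow_xy, gyro_xy, lrf_range, prev_range, dt):
    """Horizontal body velocity per Eq. (1): de-rotate the optic-flow
    rates with the gyro rates, then scale by the LRF height. Vertical
    velocity (positive down) is differenced from successive ranges."""
    v_xy = (np.asarray(flow_xy) - np.asarray(gyro_xy)) * lrf_range
    v_down = (prev_range - lrf_range) / dt  # shrinking range -> descending
    return v_xy, v_down

# Example at a 50 Hz epoch: 0.30 rad/s optic flow against 0.01 rad/s
# body rotation at 10 m height gives ~2.9 m/s longitudinal velocity.
v_xy, v_down = body_velocity([0.30, 0.05], [0.01, 0.00], 10.0, 10.002, 0.02)
print(v_xy, v_down)   # [2.9 0.5] 0.1
```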