If, instead of a frame camera, either a pushbroom scanner or a LIDAR system is modelled, the only change necessary is in the term $\mathbf{r}_i^c$. It will have the form

$$\mathbf{r}_i^c = \begin{pmatrix} 0 \\ y_i \\ -f \end{pmatrix} \quad \text{for a pushbroom scanner}$$

$$\mathbf{r}_i^c = \begin{pmatrix} -d\,\sin\alpha \\ 0 \\ -d\,\cos\alpha \end{pmatrix} \quad \text{for a LIDAR scanner}$$

where
d ... raw laser range (distance) in the laser frame
α ... scanner angle from nadir along the y-axis of the laser frame, see Figure 5
  
Figure 5: LIDAR Scanner Angle and Distance Measurement 
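As a small illustration of how these sensor-specific terms might be evaluated in software, the sketch below builds the two vectors given above. The function names, the image coordinate y_i, and the principal distance f are assumptions made for this example, and the sign conventions simply follow the reconstructed forms rather than a specific implementation.

```python
import numpy as np

def pushbroom_vector(y_i: float, f: float) -> np.ndarray:
    """r_i^c for a pushbroom scanner: the line sensor has no x image
    coordinate, so only the y coordinate and the principal distance enter."""
    return np.array([0.0, y_i, -f])

def lidar_vector(d: float, alpha: float) -> np.ndarray:
    """r_i^c for a LIDAR scanner: d is the raw laser range, alpha the
    scanner angle from nadir (in radians)."""
    return np.array([-d * np.sin(alpha), 0.0, -d * np.cos(alpha)])

# Example: a 1500 m return observed at a 15 degree scan angle
print(lidar_vector(1500.0, np.radians(15.0)))
```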
In addition, the misalignment matrix $\mathbf{R}_c^b$ between the camera frame and the body frame and the offset vectors of the GPS antenna and of the camera with respect to the IMU centre have to be determined by calibration. A detailed discussion of implementation aspects can be found in El-Sheimy (1996) for land-vehicle applications and in Mostafa and Schwarz (2000) for airborne applications.
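To show where the calibrated boresight matrix and offset vector enter the processing chain, here is a minimal sketch of a generic direct geo-referencing step. The formula used (GPS-derived IMU position plus a rotated, scaled sensor vector and lever arm) is a common textbook form assumed for illustration; it is not quoted from formula (1) of this paper, and all argument names are assumptions.

```python
import numpy as np

def georeference_point(r_imu_e, R_b_e, R_c_b, a_c_b, r_i_c, scale=1.0):
    """Map a sensor observation r_i^c into e-frame ground coordinates.

    r_imu_e : GPS-derived position of the IMU centre in the e-frame
              (the GPS antenna lever arm is assumed to be applied already)
    R_b_e   : rotation matrix from the b-frame to the e-frame (from the INS)
    R_c_b   : calibrated boresight (misalignment) matrix, c-frame to b-frame
    a_c_b   : calibrated offset of the imaging sensor with respect to the
              IMU centre, expressed in the b-frame
    r_i_c   : image or laser vector in the c-frame (see the forms above)
    scale   : image scale factor (1.0 for a LIDAR range vector)
    """
    return r_imu_e + R_b_e @ (a_c_b + scale * (R_c_b @ r_i_c))
```

For a LIDAR observation the scale factor is unity because the laser range already fixes the distance along the ray; for imagery the scale has to come from stereo or an object-space constraint.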
Using Newton's second law of motion in a rotating coordinate frame, the inertial sensor measurements, i.e. the specific force $\mathbf{f}^b$ and the angular velocity $\boldsymbol{\omega}_{ib}^b$ measured in the b-frame, can be transformed into an Earth-fixed frame, say the Conventional Terrestrial Coordinate Frame (e-frame). The resulting system of differential equations is of the form

$$\begin{pmatrix} \dot{\mathbf{r}}^e \\ \dot{\mathbf{v}}^e \\ \dot{\mathbf{R}}_b^e \end{pmatrix} = \begin{pmatrix} \mathbf{v}^e \\ \mathbf{R}_b^e \mathbf{f}^b - 2\,\boldsymbol{\Omega}_{ie}^e \mathbf{v}^e + \mathbf{g}^e \\ \mathbf{R}_b^e \left( \boldsymbol{\Omega}_{ib}^b - \boldsymbol{\Omega}_{ie}^b \right) \end{pmatrix} \qquad (2)$$
where $\boldsymbol{\Omega}_{ib}^b$ is the skew-symmetrical form of the angular velocity vector $\boldsymbol{\omega}_{ib}^b$, and the dot above a vector (bold lower case) or a matrix (bold capital) indicates differentiation. Note that the m-frame in the geo-referencing formula (1) has been replaced by the e-frame. This system is integrated to yield the parameters on the left-hand side, namely position, velocity, and the orthogonal rotation matrix $\mathbf{R}_b^e$ between the b-frame and the e-frame. The determination of this time-variable matrix is one of the central tasks of geo-referencing.
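As an illustration of how system (2) can be integrated in practice, the following sketch performs a single Euler step with a first-order rotation update and re-orthogonalization. It is a minimal sketch only: the function and variable names are assumptions, the gravity vector is taken as given, and a production strapdown mechanization would use higher-order integration with coning and sculling corrections.

```python
import numpy as np

OMEGA_IE = 7.292115e-5  # Earth rotation rate (rad/s), WGS-84 value

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def mechanization_step(r_e, v_e, R_b_e, f_b, w_ib_b, g_e, dt):
    """One Euler step of the e-frame strapdown equations (2).

    f_b    : specific force measured by the accelerometers (b-frame)
    w_ib_b : angular velocity measured by the gyros (b-frame)
    g_e    : gravity vector at the current position (e-frame), assumed given
    """
    w_ie_e = np.array([0.0, 0.0, OMEGA_IE])   # Earth rotation in the e-frame
    w_ie_b = R_b_e.T @ w_ie_e                 # ... expressed in the b-frame

    r_dot = v_e
    v_dot = R_b_e @ f_b - 2.0 * np.cross(w_ie_e, v_e) + g_e
    R_dot = R_b_e @ (skew(w_ib_b) - skew(w_ie_b))

    r_e_new = r_e + r_dot * dt
    v_e_new = v_e + v_dot * dt
    R_new = R_b_e + R_dot * dt
    # re-orthogonalize the rotation matrix (the first-order update drifts)
    u, _, vt = np.linalg.svd(R_new)
    return r_e_new, v_e_new, u @ vt
```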
The integration of the system of differential equations is started
by initializing the parameters on the left-hand side of the 
equation. Traditionally, this is done in stationary mode. In that 
case, the initial velocity is zero, the initial position is obtained 
from GPS, and the initial orientation matrix is determined by an 
alignment procedure that makes use of accelerometer leveling 
and gyro compassing. The alignment is usually done in two 
steps. In the coarse alignment, simplified formulas are used to obtain pitch and roll from accelerometer measurements and azimuth from gyro measurements, within an accuracy of a few degrees. In the fine alignment, small-angle error formulas are used in a filtering scheme to obtain more accurate estimates of the parameters. The rotation matrix $\mathbf{R}_b^e$ can then be obtained
from the estimated pitch, roll, and azimuth or an equivalent 
parameterization, as for instance quaternions. This is the 
standard alignment procedure if sensor measurements from a 
navigation-grade inertial system are available. For medium and 
low accuracy systems, this method cannot be applied. Because 
of the unfavorable signal-to-noise ratio for $\boldsymbol{\omega}_{ib}^b$, the gyro
compassing procedure will not converge. Thus, stationary 
alignment cannot be used with MEMS-based or other low- 
accuracy IMUs. 
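To make the coarse stationary alignment concrete, the sketch below applies the standard accelerometer-levelling and gyro-compassing formulas to averaged static sensor readings. The x-forward, y-right, z-down body-frame convention and the function name are assumptions made for this illustration; the paper itself does not spell out these formulas.

```python
import numpy as np

def coarse_alignment(f_b, w_b):
    """Coarse stationary alignment from averaged accelerometer (f_b) and
    gyro (w_b) readings, body frame x-forward / y-right / z-down (assumed).

    Returns roll, pitch, azimuth in radians, accurate to a few degrees;
    a fine-alignment filter refines these values afterwards.
    """
    fx, fy, fz = f_b
    # accelerometer levelling: at rest the accelerometers sense -gravity
    roll = np.arctan2(-fy, -fz)
    pitch = np.arctan2(fx, np.hypot(fy, fz))

    # gyro compassing: rotate the sensed Earth rate into the levelled frame
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    w_level_x = cp * w_b[0] + sp * sr * w_b[1] + sp * cr * w_b[2]
    w_level_y = cr * w_b[1] - sr * w_b[2]
    azimuth = np.arctan2(-w_level_y, w_level_x)
    return roll, pitch, azimuth
```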
The alternative is in-motion alignment. This method has mainly 
been used in airborne applications, specifically for the in-air 
alignment of inertial systems. It is obviously dependent on 
using additional sensors, which in this case are GPS receiver 
outputs, i.e. position and/or velocity in the e-frame. In-motion
alignment makes use of the fact that very accurate position 
information is available at a high data rate. It is therefore 
possible to determine accurate local-level velocities. By 
combining these velocities with the ones obtained from the 
strapdown IMU, the rotation matrix $\mathbf{R}_b^e$ can be determined.
When implementing this approach for medium or low accuracy 
IMUs two difficulties have to be addressed. The first one is that 
either no azimuth information at all is available, or that it is
derived from GPS velocities and is rather inaccurate. Thus, the 
standard small-angle error models cannot be used any more 
because the azimuth error can be large. By reformulating the 
velocity error equations, one can arrive at a set of equations that 
converges quickly for azimuth, even if the initial errors are 
large. Scherzinger (1994) has given a thorough discussion of 
the problem and its solutions, based on earlier work by Benson 
(1975). A land vehicle application, using a MEMS-IMU 
integrated with GPS, is given in Shin and El-Sheimy (2004). 
Convergence is fast in this case, taking only about 50 s. This 
bodes well for airborne applications where, due to the higher 
velocities, a better signal-to-noise ratio should further improve 
the results. 
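The sketch below illustrates the basic vector-matching idea behind in-motion alignment rather than the specific formulations of Scherzinger (1994) or Shin and El-Sheimy (2004): according to system (2), each epoch relates the measured specific force $\mathbf{f}^b$ to a GPS-derived e-frame vector, and a batch rotation fit (a Wahba-type solution via SVD) then yields an approximate initial $\mathbf{R}_b^e$. All names and the batch formulation are illustrative assumptions.

```python
import numpy as np

OMEGA_IE = 7.292115e-5  # Earth rotation rate (rad/s)

def coarse_in_motion_alignment(t, v_e, f_b, g_e):
    """Estimate an initial R_b^e by vector matching.

    t   : (N,)   GPS time tags
    v_e : (N, 3) GPS-derived velocities in the e-frame
    f_b : (N, 3) specific force measured by the IMU at the same epochs
    g_e : (N, 3) gravity vectors at the GPS positions

    From system (2): R_b^e f^b = dv^e/dt + 2 w_ie^e x v^e - g^e, so every
    epoch provides a vector pair (f^b, u^e) that constrains the rotation.
    """
    w_ie_e = np.array([0.0, 0.0, OMEGA_IE])
    a_e = np.gradient(v_e, t, axis=0)                 # numerical v-dot from GPS
    u_e = a_e + 2.0 * np.cross(w_ie_e, v_e) - g_e     # e-frame reference vectors

    # Wahba's problem: find R minimizing sum ||u_e - R f_b||^2 (SVD solution)
    B = u_e.T @ f_b
    U, _, Vt = np.linalg.svd(B)
    D = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ D @ Vt
```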
The second difficulty is more fundamental. It has to do with the 
way in which the non-linearity in equation system (2) and in the 
corresponding GPS measurements is handled. The standard
approach is to expand the errors in position, velocity, and 
orientation into a Taylor series and to truncate the series after 
the linear term. The error equations obtained in this way are 
then cast into state vector form. By adding the linearized GPS 
measurements to the model and by representing the state 
variable distribution by a Gaussian random variable, the standard extended Kalman filter formulation is obtained.