XIXth Congress (Part B7/1)

Fraser, Clive
dependent polynomial functions. Essentially, the image is subdivided into a number of sections. At the centre scan line of 
each section there are six unknown EO elements, which are often referred to as ‘orientation images’. In the intervals 
between these reference scan lines, first-order, quadratic or higher-order polynomial functions are used to describe the 
smooth variation in sensor EO. 
This approach has been successfully employed in the triangulation of MOMS-02 3-line imagery (Kornus et al., 1995; Fraser 
& Shao, 1996), where Lagrange polynomials were used to model sensor position and attitude over the scan line interval 
between adjacent reference lines, the third-order variation function for an EO parameter being given as 
  
$$
P_3(t) \;=\; \sum_{i=1}^{4} P(t_i) \prod_{\substack{j=1 \\ j \neq i}}^{4} \frac{t - t_j}{t_i - t_j} \qquad (3)
$$
where P3(t) at time t is a linear combination of P(t_i) at the four adjacent orientation images. A perceived advantage of the
Lagrange polynomial approach is that the interpolation function is dependent only upon the nearest one or two orientation 
images on each side of a given scan line. Generally speaking, the shorter the interval between reference lines, the lower the 
order of the interpolation function. There is often a balancing act required: too many reference lines lead to a less well
conditioned solution, which may necessitate additional ground control and pass points. On the other hand, too few orientation
images may mean that the adopted interpolation function cannot adequately model the dynamically changing EO of the sensor.
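The Lagrange interpolation scheme described above can be sketched in Python. The reference times and attitude values below are hypothetical, chosen only to illustrate the computation; in practice P(t_i) would be an EO element (position or attitude component) at each of the four orientation images bracketing the scan line of interest.

```python
# Sketch of third-order Lagrange interpolation of a single EO parameter
# between four adjacent "orientation images" (reference scan lines).
# All numerical values here are illustrative, not from the paper.

def lagrange_interpolate(t, t_ref, p_ref):
    """Evaluate P(t) as a Lagrange combination of the parameter values
    p_ref given at the four reference times t_ref (orientation images)."""
    assert len(t_ref) == len(p_ref) == 4
    total = 0.0
    for i, (ti, pi) in enumerate(zip(t_ref, p_ref)):
        basis = 1.0
        for j, tj in enumerate(t_ref):
            if j != i:
                basis *= (t - tj) / (ti - tj)
        total += pi * basis
    return total

# Hypothetical example: a roll angle (radians) sampled at four
# reference scan-line times, interpolated at an intermediate time.
t_ref = [0.0, 1.0, 2.0, 3.0]
roll = [0.010, 0.012, 0.011, 0.009]
print(lagrange_interpolate(1.5, t_ref, roll))
```

Because four support points define a unique cubic, the interpolant reproduces the reference values exactly at the orientation images and varies smoothly in between.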
Application of the multiple projection centre model, or interpolative platform model, does not require a priori knowledge of 
the satellite orbit, though provision of approximate EO parameters does aid in solution convergence. In applying this 
approach to MOMS imagery, Fraser and Shao (1996) used reference line intervals varying from 4000 to 8000 scan lines, and
ground control arrays comprising from 4 to 20 points. Accuracies attained in ground point triangulation were on the order of
0.5 to 1 pixel, though it was noted that the method is prone to a measure of numerical instability, and is thus very sensitive
to observational errors.
While extrapolation of these orientation results from lower resolution satellites to the case of 1m imagery is by no means
robust, results obtained with the MOMS-02 and IRS-1C sensors suggest that triangulation accuracies approaching the 1-pixel
level might well be achievable. Thus, the absence of prior EO information may not constitute a
significant impediment to attaining high accuracy results with 1m imagery. One important factor which cannot be 
overlooked, however, is that application of the collinearity equation model with multiple projection centres still requires a 
comprehensive knowledge of sensor interior orientation, though it is possible to self-calibrate the sensor provided the 
necessary (and not terribly practical) imaging geometry is in place (e.g. Ebner et al., 1992). 
4.2 Rational Functions 
As a practical means of extracting 3D information from stereo satellite imagery in the absence of either a camera model or 
EO data, a model based on ‘rational functions’ has been proposed. Rational functions are polynomial-based, empirical 
models which generally comprise terms to third order and express image coordinates as a direct function of object space 
coordinates, in much the same way as do collinearity equations. These functions, which provide a continuous mapping 
between image and object space, are given as a ratio of polynomials comprising coefficients that defy straightforward 
geometric interpretation. Indeed, it is said that one reason rational functions gained popularity for military imaging satellites 
was that the satellite orbital elements and also the EO could not be derived from the rational function coefficients. 
A general model for the rational function approach, which is appropriate for mono and stereo imaging configurations, is 
given as 
$$
x \;=\; \frac{a_0 + a_1 X + a_2 Y + a_3 Z + a_4 XY + a_5 XZ + a_6 YZ + a_7 XYZ + a_8 X^2 + \dots + a_{19} Z^3}
{1 + b_1 X + b_2 Y + b_3 Z + b_4 XY + b_5 XZ + b_6 YZ + b_7 XYZ + b_8 X^2 + \dots + b_{19} Z^3}
$$
$$
y \;=\; \frac{c_0 + c_1 X + c_2 Y + c_3 Z + c_4 XY + c_5 XZ + c_6 YZ + c_7 XYZ + c_8 X^2 + \dots + c_{19} Z^3}
{1 + d_1 X + d_2 Y + d_3 Z + d_4 XY + d_5 XZ + d_6 YZ + d_7 XYZ + d_8 X^2 + \dots + d_{19} Z^3} \qquad (4)
$$
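A minimal sketch of evaluating such a rational function model in the forward direction follows. The term ordering beyond the second-order terms, and all coefficient values, are assumptions for illustration (published RFM implementations fix their own ordering); the denominators carry the implicit leading coefficient of 1.

```python
# Sketch of the forward rational function model: image coordinates
# (x, y) as ratios of third-order polynomials in object-space (X, Y, Z).
# The cubic-term ordering below is one common convention, assumed here.

def poly3(coeffs, X, Y, Z):
    """Third-order polynomial with 20 terms: 1, X, Y, Z, XY, XZ, YZ,
    XYZ, X^2, Y^2, Z^2, then the cubic terms (ordering assumed)."""
    terms = [1, X, Y, Z, X*Y, X*Z, Y*Z, X*Y*Z,
             X**2, Y**2, Z**2, X**3, X**2*Y, X**2*Z,
             X*Y**2, Y**3, Y**2*Z, X*Z**2, Y*Z**2, Z**3]
    return sum(c * t for c, t in zip(coeffs, terms))

def rfm(a, b, c, d, X, Y, Z):
    """Map an object point (X, Y, Z) to an image point (x, y).
    a and c have 20 coefficients; b and d have 19, with an implicit
    leading 1 in each denominator."""
    x = poly3(a, X, Y, Z) / poly3([1.0] + list(b), X, Y, Z)
    y = poly3(c, X, Y, Z) / poly3([1.0] + list(d), X, Y, Z)
    return x, y

# Degenerate check with hypothetical coefficients: a picks out X,
# c picks out Y, and all denominator coefficients are zero, so the
# model reduces to the identity mapping x = X, y = Y.
a = [0.0, 1.0] + [0.0] * 18
c = [0.0, 0.0, 1.0] + [0.0] * 17
b = d = [0.0] * 19
print(rfm(a, b, c, d, 2.0, 3.0, 1.0))
```

Note that because the coefficients are purely empirical, nothing in this evaluation requires (or reveals) the sensor's interior or exterior orientation, which is precisely the property discussed above.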
  
456 International Archives of Photogrammetry and Remote Sensing. Vol. XXXIII, Part B7. Amsterdam 2000. 
  