Figure 1. SPOT 5 scene of Melbourne, Australia 
The affine projective model is the simplest method of relating 
image space coordinates to object space coordinates without 
any knowledge of the sensor model or the exterior orientation 
of the sensor. Early research was carried out using moderate 
resolution satellite sensors such as SPOT and MOMS (Okamoto 
et al., 1999; Hattori et al., 2000), but more recently it has been 
successfully applied to high resolution satellite imagery, 
specifically IKONOS (Yamakawa et al., 2002; Fraser and Yamakawa, 2004; Hanley et al., 2002). Although it requires 
only a modest number of ground control points (GCPs), the 
affine model has been shown to produce results to sub-pixel 
accuracy. The general form of the model describing an affine 
transformation from 3D object space (X, Y, Z) to 2D image space 
(x, y) for a given point i is expressed as: 
x_i = A_1 X_i + A_2 Y_i + A_3 Z_i + A_4 
y_i = A_5 X_i + A_6 Y_i + A_7 Z_i + A_8                                    (1) 
where x_i, y_i are image space coordinates; X_i, Y_i, Z_i are object space 
coordinates; and A_1 to A_8 are the eight affine parameters. 
These eight parameters per image account for translation, 
rotation, and non-uniform scaling and skew distortion. Implicit 
in Equation 1 are two projections, one scaled-orthogonal and 
the other skew-parallel. In the reported implementation of the 
affine projective model, all model parameters are recovered 
simultaneously along with triangulated ground point 
coordinates in a process analogous to photogrammetric bundle 
adjustment. 
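
The following Python sketch illustrates, under simplifying assumptions, how the eight affine parameters of Equation 1 could be recovered from ground control points by linear least squares and then used to project object space points into image space. It is a minimal single-image illustration only; the function names and the NumPy-based formulation are not from the paper, and the reported implementation additionally triangulates ground point coordinates simultaneously with the model parameters.

```python
import numpy as np

def fit_affine_parameters(gcp_xyz, gcp_xy):
    """Estimate the eight affine parameters A1..A8 of Equation 1 by
    linear least squares from ground control points (illustrative only).

    gcp_xyz : (n, 3) object space coordinates (X, Y, Z) of the GCPs
    gcp_xy  : (n, 2) corresponding image space coordinates (x, y)
    """
    gcp_xyz = np.asarray(gcp_xyz, dtype=float)
    gcp_xy = np.asarray(gcp_xy, dtype=float)
    # Design matrix rows are [X, Y, Z, 1]; x and y are solved independently.
    design = np.hstack([gcp_xyz, np.ones((gcp_xyz.shape[0], 1))])
    a14, *_ = np.linalg.lstsq(design, gcp_xy[:, 0], rcond=None)  # A1..A4
    a58, *_ = np.linalg.lstsq(design, gcp_xy[:, 1], rcond=None)  # A5..A8
    return np.concatenate([a14, a58])

def project_affine(params, xyz):
    """Apply Equation 1: project object space points into image space."""
    A = np.asarray(params, dtype=float).reshape(2, 4)
    pts = np.atleast_2d(np.asarray(xyz, dtype=float))
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return homog @ A.T  # (n, 2) array of (x, y) image coordinates
```
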
The affine model assumes, firstly, that the projection from 
object space to image space is an affine projection and, 
secondly, that lines of acquired image data are parallel to each 
other. The first assumption holds true for high-resolution 
satellite imaging sensors which have a very narrow field of 
view of around 2° or less. Previous studies have shown the 
assumption of parallel rather than perspective projection to be 
sufficiently valid. The second assumption is true if the satellite 
travels in a straight line, parallel to the ground during image 
acquisition. Thus, the Universal Transverse Mercator projection 
(UTM zone 55) was employed as the object space reference 
coordinate system in the reported investigation, since the 
assumption of a straight line track for the satellite, parallel to 
the ‘XY plane’, is sufficiently valid within this projection 
coordinate framework. 
4. IMAGE MATCHING 
The matching methodology implemented in this study 
combines image space matching with an object space geometric 
constraint, namely the affine projective model. Usually image 
space matching uses a geometric constraint in image space, 
such as epipolar geometry, to constrain the matching process. 
The constraint is necessary to reduce the search space, which in 
turn reduces processing time, as well as reducing the likelihood 
of erroneous matches. The use of an object space geometric 
constraint replaces the need for the epipolar constraint. 
Matching points using geometric constraints in object space 
rather than image space is simply another way of describing the 
search for an unknown height value by moving along an image 
nadir line until a highly correlated match of image pixels is 
found. This method of matching has previously been described 
by Benard (1984), and subsequently incorporated into many 
object space matching processes (Helava, 1988; Ebner and 
Heipke, 1988; Gruen and Baltsavias, 1986). The method works 
by taking an object space point (X_0, Y_0, Z_0) and projecting it, 
using the affine projective model, into the image spaces of the 
images being matched: (x_1, y_1) for image 1 and (x_2, y_2) for 
image 2. These two image points are then matched, in image 
space, using a typical intensity-based matching strategy. The 
similarity measure (in this case the cross-correlation 
coefficient) for the match is recorded. A new object space point 
(X_0, Y_0, Z_0 + dZ) is then transformed into image coordinates and 
matched as before. Once again the similarity measure is 
recorded. The process is repeated for all values of Z_i between 
the lower and upper limits of Z. The value of Z_i which 
corresponds to the greatest similarity measure is taken as the 
determined height at the point (X_0, Y_0). The process is repeated 
for all points (X_j, Y_j). 
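
A rough Python sketch of this height search is given below. It assumes the project_affine function from the earlier sketch, together with hypothetical extract_chip and correlation_coefficient helpers (the latter is sketched after the correlation formula below); these names and the chip size are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def match_height(x0, y0, z_min, z_max, dz,
                 params1, params2, image1, image2, chip_size=15):
    """Search along Z at planimetric position (X0, Y0) for the height whose
    projections into the two images yield the highest similarity measure."""
    best_z, best_score = None, -1.0
    for z in np.arange(z_min, z_max + dz, dz):
        # Project the candidate object space point into both images (Equation 1).
        x1, y1 = project_affine(params1, [x0, y0, z])[0]
        x2, y2 = project_affine(params2, [x0, y0, z])[0]
        master = extract_chip(image1, x1, y1, chip_size)  # hypothetical helper
        slave = extract_chip(image2, x2, y2, chip_size)   # hypothetical helper
        score = correlation_coefficient(master, slave)    # sketched below
        if score > best_score:
            best_z, best_score = z, score
    return best_z, best_score
```
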
The similarity measure implemented in this matching strategy 
to compare conjugate points was the cross-correlation 
coefficient, γ, given by (Gonzalez and Woods, 1992): 
γ = σ_MS / (σ_M σ_S)                                                       (2) 
where σ_M and σ_S are the standard deviations of the master and 
slave chips being matched, and σ_MS is the covariance of the 
master chip with the slave chip. 
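
As a simple illustration, the coefficient above can be computed for two equally sized image chips as follows; the function name and NumPy formulation are illustrative rather than taken from the reported implementation.

```python
import numpy as np

def correlation_coefficient(master, slave):
    """Normalised cross-correlation coefficient of two equally sized chips:
    the covariance of the chips divided by the product of their standard
    deviations, giving a value between -1 and 1."""
    m = np.asarray(master, dtype=float).ravel()
    s = np.asarray(slave, dtype=float).ravel()
    covariance = np.mean((m - m.mean()) * (s - s.mean()))
    denominator = m.std() * s.std()
    return covariance / denominator if denominator > 0 else 0.0
```
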
Since the matching strategy in this study is driven by an object 
space geometric constraint, points have to be initially selected 
in object space before being transformed into image space and 
matched. Therefore, in order to generate the candidate matching 
points, a grid of three dimensional object space points covering 
the area of interest was created. These points were then 
sequentially transformed into image space coordinates and 
matched according to the method described above. 
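
A schematic sketch of this grid-driven process, reusing the hypothetical match_height function from the earlier sketch, might look as follows; the grid spacing, height limits, and the optional rejection of weak matches are assumptions rather than details from the paper.

```python
import numpy as np

def generate_height_grid(x_range, y_range, spacing, z_min, z_max, dz,
                         params1, params2, image1, image2):
    """Lay a planimetric grid over the area of interest and determine a
    height at each node with the object space matching described above."""
    xs = np.arange(x_range[0], x_range[1] + spacing, spacing)
    ys = np.arange(y_range[0], y_range[1] + spacing, spacing)
    heights = np.full((len(ys), len(xs)), np.nan)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            z, score = match_height(x, y, z_min, z_max, dz,
                                    params1, params2, image1, image2)
            heights[i, j] = z  # a low score could be used to reject the match
    return heights
```
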