point of the existing control point network (Davies et al., 1996) was used in addition. This turned out to be essential for the accuracy of the adjusted data. Object point coordinates agree
with those of this previous network within 2%. We obtained a 
final accuracy of 0.5 pixels for image point coordinates (a priori 
accuracy: 1 pixel), 1 km for the spacecraft position data, and 
0.1-1 mrad for the camera pointing data. 
3.1.2 Determination of conjugate image points: We determined conjugate image points with an adaptive least squares correlation algorithm (Gruen, 1985), in which the patterns of grey values of patches in a reference and a search image are compared. This algorithm has been suggested to be less sensitive to differing pixel resolution than cross-correlation (Förstner, 1995).
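For illustration only, a minimal sketch (in Python/NumPy) of a simplified, shift-only variant of least squares matching is given below; the adaptive algorithm of Gruen (1985) additionally estimates affine and radiometric parameters, and the function names, default values, and nearest-neighbour resampling used here are our own assumptions.

    import numpy as np

    def lsq_match(ref_patch, search_img, x0, y0, iters=10):
        # Estimate the sub-pixel shift (dx, dy) that aligns a reference patch
        # with the search image around (x0, y0) by iteratively linearizing the
        # grey-value differences (shift-only simplification of least squares
        # matching; not the full adaptive formulation).
        h, w = ref_patch.shape
        ys, xs = np.mgrid[0:h, 0:w]
        dx = dy = 0.0
        for _ in range(iters):
            # Resample the search patch at the current position (nearest
            # neighbour for brevity; bilinear resampling in practice).
            yy = np.clip(np.round(ys + y0 + dy).astype(int), 0, search_img.shape[0] - 1)
            xx = np.clip(np.round(xs + x0 + dx).astype(int), 0, search_img.shape[1] - 1)
            patch = search_img[yy, xx].astype(float)
            r = (ref_patch.astype(float) - patch).ravel()   # grey-value residuals
            gy, gx = np.gradient(patch)                     # image gradients
            A = np.column_stack([gx.ravel(), gy.ravel()])   # design matrix
            upd, *_ = np.linalg.lstsq(A, r, rcond=None)     # least squares update
            dx += upd[0]
            dy += upd[1]
            if np.hypot(upd[0], upd[1]) < 1e-3:             # sub-pixel convergence
                break
        return dx, dy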
An optimum patch size must be chosen. The transformation between the image patches assumes that the surface viewed by the patches is approximately planar. This suggests that the patch size be small, since otherwise smoothing effects in the terrain model, matching failures, or, in the worst case, topography blunders occur. On the other hand, the prevalence of image noise and a required minimum of texture rather suggest choosing large patch sizes.
Matching of images from different spectral filters failed; 
therefore, only single-filter images were used to derive DTMs. 
3.1.3 DTM generation: The coordinates of conjugate image 
points (in terms of line, sample) in the two stereo images were 
converted to ground coordinates (in terms of x,y,z) using 
adjusted navigation data and applying the co-linearity equations 
and least squares fitting. For comparison with the 
photoclinometry models, the resulting three-dimensional cloud 
of object points was then transformed into the frame of the 
image on which the photoclinometry model was based, i.e., x, y, and z were converted to line, sample, and height, where height
is measured with respect to a plane parallel to the image plane 
that contains the center of Ida. Finally, a digital terrain model 
was interpolated in image space using the "inverse distance" 
approximation. This sometimes resulted in topography gaps if 
the number of points required for the interpolation was too 
small. 
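The interpolation step can be sketched as follows (illustrative only; the search radius, weighting exponent, and minimum point count are assumptions, not the values used in our processing):

    import numpy as np

    def idw_dtm(pts_ls, pts_h, grid_lines, grid_samples,
                radius=5.0, power=2.0, min_pts=3):
        # Interpolate scattered object points (line/sample positions pts_ls,
        # heights pts_h) onto a regular grid in image space by inverse
        # distance weighting.  Grid nodes with fewer than min_pts neighbours
        # inside the search radius are left undefined (NaN), which produces
        # the gaps mentioned above.
        dtm = np.full((len(grid_lines), len(grid_samples)), np.nan)
        for i, line in enumerate(grid_lines):
            for j, sample in enumerate(grid_samples):
                d = np.hypot(pts_ls[:, 0] - line, pts_ls[:, 1] - sample)
                near = d < radius
                if near.sum() < min_pts:
                    continue                      # too few points -> gap
                w = 1.0 / np.maximum(d[near], 1e-6) ** power
                dtm[i, j] = np.sum(w * pts_h[near]) / np.sum(w)
        return dtm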
3.2 Two-dimensional photoclinometry 
3.2.1 Approach: Topographic modeling of subareas of several 
Galileo images of asteroids by photoclinometry was previously 
carried out in order to facilitate crater-depth studies (Carr et al., 
1994; Sullivan et al., 1996). The method used is the two- 
dimensional photoclinometry algorithm of Kirk (1987): The 
surface shape is parameterized with finite elements in image 
space; that is, the projection of each image pixel onto the 
surface is an "element" and the displacements, measured toward 
the camera, of the corners of the pixels are the topographic 
unknowns being solved for. Standard finite-element techniques 
are used to set up nonlinear equations relating the unknowns 
(displacements) to the knowns (pixel brightnesses) via the 
gradients of displacement and the photometric function. A 
number of numerical techniques are then used to solve these 
equations, such as iterative linearization of the nonlinear 
equations (i.e., the Newton-Raphson method), iterative solution 
of the linearized equations by the method of relaxation, as well 
as multi-gridding to speed convergence of long-wavelength 
portions of the topography. 
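The two-dimensional finite-element formulation is too involved to reproduce here, but the underlying principle (brightness constrains slope, and slopes are integrated to heights) can be illustrated with a one-dimensional profile sketch. This is not the algorithm of Kirk (1987); it assumes a profile oriented down-sun and a photometric function that is monotonic in slope and brackets every observed brightness over the given interval.

    import numpy as np
    from scipy.optimize import brentq

    def profile_photoclinometry(brightness, photometric_fn, dx=1.0,
                                slope_bounds=(-0.99, 0.99)):
        # One-dimensional illustration of the photoclinometric principle:
        # invert the photometric function pixel by pixel to obtain the
        # along-profile slope, then integrate the slopes to relative heights.
        slopes = np.array([brentq(lambda s: photometric_fn(s) - b, *slope_bounds)
                           for b in brightness])
        return np.cumsum(slopes) * dx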
Practical experience indicates that considerable judgement is 
needed to determine when to change the number of relaxation 
steps before relinearizing, the over/under-relaxation parameter, 
and the conditions for changing grid resolution. All models 
shown here were therefore generated with direct supervision of 
iteration. 
As the photoclinometric equations are underdetermined, 
additional boundary conditions must be introduced. Here, we 
use a measure of the "roughness" of the surface model, (dz/dx)² + (dz/dy)², integrated over the surface, which is minimized at
the same time that the image is modeled. The quantities in this 
squared gradient are in image coordinates; in particular, z is the 
displacement towards the camera. 
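A finite-difference sketch of this roughness measure (the actual implementation evaluates it on the finite-element grid) is:

    import numpy as np

    def roughness(z, dx=1.0, dy=1.0):
        # Regularization term: the squared gradient of the displacement z
        # (measured toward the camera), summed over the pixel grid as an
        # approximation of the integral over the surface model.
        dz_dy, dz_dx = np.gradient(z, dy, dx)
        return np.sum(dz_dx**2 + dz_dy**2) * dx * dy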
Determination of when the solution has converged adequately 
could be problematic. In the multi-resolution algorithm, an 
estimate of the truncation error, i.e., the unavoidable error
introduced by coarsening the resolution, is generated at each 
resolution except the finest one. The truncation error can, 
however, be extrapolated to the finest resolution based on the 
others. Iteration was continued until the residuals were less 
than the truncation error at all resolutions. 
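Schematically, this stopping rule amounts to the following check (variable names are ours):

    def converged(residual_by_level, truncation_error_by_level):
        # Stop when, at every grid resolution, the remaining residual is
        # smaller than the estimated truncation error for that resolution.
        return all(r < t for r, t in zip(residual_by_level,
                                         truncation_error_by_level))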
3.2.2 Input Data: As the starting point of the iterations, a 
global model of the shape of Ida (Thomas et al., 1996) with 2 
degree resolution was used. It was also used to estimate the 
surface scattering properties, i.e., the photometric function that 
relates surface slopes to the image brightness, to be used in the 
analysis. The global shape model was shaded with photometric 
functions combining the Lommel-Seeliger (lunar) and Lambert 
functions linearly in various proportions (McEwen, 1991), and 
the proportion that best fit each observed image was adopted to 
define the model photometric function for that image. This was 
consistent with Hapke's physically based scattering model 
(Hapke, 1993) with a low single-scattering albedo across a 
range of phase angles (McEwen, 1991). These results lend 
confidence that the surface scattering properties have been 
modeled adequately. 
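For reference, the lunar-Lambert blend can be written as follows (a sketch after McEwen, 1991; the normalization used in our processing may differ):

    def lunar_lambert(mu0, mu, L, albedo=1.0):
        # Linear blend of the Lommel-Seeliger ("lunar") and Lambert
        # photometric functions.  mu0 = cos(incidence angle),
        # mu = cos(emission angle), L = blend parameter
        # (L = 1: pure Lommel-Seeliger, L = 0: pure Lambert).
        lommel_seeliger = 2.0 * mu0 / (mu0 + mu)
        lambert = mu0
        return albedo * (L * lommel_seeliger + (1.0 - L) * lambert)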
4. RESULTS 
4.1 Photogrammetry models 
We derived digital terrain models from two stereo pairs of 
images, respectively, in three distinct regions (termed I, II, and 
III in the following) in which photoclinometric terrain models 
were available (Fig.1). The first step of image correlation was 
carried out successfully throughout most of the study area. 
However, parts of region III show little texture and large 
distortions in the topography (see the scarp in the lower part of 
region III) which caused the matching to fail, and therefore 
resulted in gaps in the terrain model (Fig.4a). 
The effect of patch sizes of 10, 14, 18, and 22 pixels on 
the resulting topography was thoroughly analyzed: while large
patch sizes of 18 and 22 pixels resulted in smoothing effects 
(i.e. small-scale features vanished and medium-sized craters 
became flatter), the topography became noisy at a patch size of 10 pixels. We therefore selected a patch size of 14 pixels for the matching.
4.2 Comparison with photoclinometry models 
A first inspection of the terrain models (Figs. 2a, 3a, and 4a)
shows that photogrammetry and photoclinometry reflect the 
surface features seen in the images rather differently. The 
photoclinometry models are smoothly shaped but clearly show 
craters at large and small scales. The photogrammetry models are rough at small scales and resolve only large and medium-sized craters. Moreover, the large-scale topography seems to
differ in both models, especially in regions I and II. 
We attempted a more quantitative comparison between the two 
models and computed height profiles along specific image lines 
(Figs. 2b, 3b, and 4b). Apparently, height differences of up to 600 meters with respect to the regional trends occur (cf. Fig. 2a). The topography of some large-scale features, such as the large
crater in Fig.3b shows striking differences. This crater is more 
than 50 [...]
Fig. 1: [caption truncated]
	        