International Archives of Photogrammetry and Remote Sensing, Vol. XXXI, Part B3, XVIIIth ISPRS Congress, Vienna 1996

    
Before discussing methods of indirectly determining the corresponding parameters from imagery, a few thoughts about the relevance of this task in the era of GPS, INS, and sensor integration are in order.
In theory, GPS and INS allow for the direct measurement 
of the exterior orientation parameters and thus render 
photogrammetric solutions for this task obsolete. The main point is that ground control as such is not needed in the scenario of GPS/INS photogrammetry, and thus the resulting multi-sensor data acquisition device becomes fully autonomous (see e.g. Ackermann 1995b for a discussion of the possibilities of autonomous multi-sensor systems). The accuracy requirements for the orientation parameters of various photogrammetric applications are discussed and compared to available GPS and
INS measurement accuracies e.g. by Schade (1994) and 
Schwarz et al. (1994). Among others, Ackermann (1994), Burman and Torlegärd (1994), and Hothem (1995) report on
the state of the art of aerial triangulation using GPS 
observations for the projection centres of the camera. 
Without going into detail, it is concluded here that while
the impact of GPS and INS on photogrammetric orien- 
tation is already large and still growing, photogrammetry 
without ground control is not yet a reality. However, any automation in image-based exterior orientation procedures has to be seen and judged in the light of the
developments in the direct measurement of the exterior 
orientation parameters using GPS and INS. 
4.1 Automatic relative orientation 
The relative orientation of two overlapping images describes their relative position and attitude with respect to one another. It is a 5-parameter problem.
Given these 5 parameters all imaging rays of conjugate 
features intersect, and these intersections form the model 
surface. After having completed the interior orientation 
for both images separately, the two image coordinate 
systems are explicitly known. Therefore, relative orienta- 
tion is a non-semantic task, and arbitrary conjugate fea- 
tures can be used for the computation of the orientation 
parameters. It must only be ensured that enough features, distributed across the complete model, are used.
A general, autonomous module for relative orientation 
should be fast, accurate, robust, and reliable (see again 
chapter 1). Furthermore, it should not require any approximate values (in particular, scale and rotation invariance should
be available), and the approach should ideally work with multi-temporal, multi-spectral, and multi-sensor imagery.
The input should consist only of the images themselves and the results of interior orientation; the output comprises the five orientation parameters, the three-dimensional coordinates of the conjugate features, and corresponding accuracy measures.
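Since relative orientation is a non-semantic task, any set of well-distributed conjugate points suffices. As a minimal numerical sketch of the underlying geometry (not the module described in the text), the 5-parameter relationship can be expressed through the essential matrix, here estimated linearly from synthetic, invented camera and point values on normalized image coordinates:

```python
import numpy as np

# Synthetic geometry: camera 1 at the origin, camera 2 rotated and
# translated (all values are illustrative assumptions).
rng = np.random.default_rng(0)
angle = np.deg2rad(5.0)
R = np.array([[np.cos(angle), 0, np.sin(angle)],
              [0, 1, 0],
              [-np.sin(angle), 0, np.cos(angle)]])
t = np.array([1.0, 0.1, 0.2])

X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(12, 3))   # object points
x1 = X / X[:, 2:3]                                      # normalized coords, image 1
X2 = X @ R.T + t
x2 = X2 / X2[:, 2:3]                                    # normalized coords, image 2

# Linear estimate of the essential matrix E from x2^T E x1 = 0:
# each correspondence contributes one row kron(x2, x1).
A = np.array([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
_, _, Vt = np.linalg.svd(A)
E = Vt[-1].reshape(3, 3)

# Enforce the rank-2 constraint (two equal singular values, one zero).
U, s, Vt = np.linalg.svd(E)
E = U @ np.diag([1.0, 1.0, 0.0]) @ Vt

# All conjugate imaging rays now satisfy the coplanarity condition.
residuals = np.abs(np.einsum('ni,ij,nj->n', x2, E, x1))
print(residuals.max())
```

With error-free synthetic points the residuals of the coplanarity condition vanish up to numerical noise; with real, automatically matched features, robust estimation against false matches would of course be indispensable.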
A generic solution for autonomous relative orientation involves the following steps:
- compute image pyramids for both images separately,
- approximately determine overlap and possible rotation and scale differences between the images on the highest level,
- extract features, possibly including relations,
- match these features (and relations),
- determine coarse orientation parameters,
- proceed with extraction, matching, and parameter determination through the pyramid from coarse to fine in order to increase the accuracy of the results.
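The coarse-to-fine principle behind these steps can be illustrated with a deliberately simple stand-in problem: estimating an integer image shift through a pyramid, where each finer level only searches a small window around the doubled estimate from the level above. The block-averaging pyramid, search radius, and test pattern are all simplifying assumptions, not the actual module:

```python
import numpy as np

def pyramid(img, levels):
    """Image pyramid by 2x2 block averaging (a stand-in for proper
    low-pass filtering before subsampling)."""
    pyr = [img]
    for _ in range(levels - 1):
        h, w = (s // 2 * 2 for s in pyr[-1].shape)
        pyr.append(pyr[-1][:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr  # pyr[0] is the finest level, pyr[-1] the coarsest

def estimate_shift(a, b, levels=3, radius=2):
    """Integer shift (dy, dx) with b ~ np.roll(a, (dy, dx)), found
    coarse-to-fine: each level refines the doubled estimate of the
    coarser level by a small exhaustive SSD search."""
    pa, pb = pyramid(a, levels), pyramid(b, levels)
    dy = dx = 0
    for la, lb in zip(pa[::-1], pb[::-1]):       # coarsest to finest
        dy, dx = 2 * dy, 2 * dx                  # propagate the estimate down
        best, best_err = (dy, dx), np.inf
        for sy in range(dy - radius, dy + radius + 1):
            for sx in range(dx - radius, dx + radius + 1):
                err = np.mean((la - np.roll(lb, (-sy, -sx), axis=(0, 1))) ** 2)
                if err < best_err:
                    best, best_err = (sy, sx), err
        dy, dx = best
    return dy, dx

# Smooth synthetic test pattern, shifted by a known amount.
y, x = np.mgrid[0:64, 0:64]
a = np.exp(-((y - 20.0) ** 2 + (x - 30.0) ** 2) / 72.0)
b = np.roll(a, (5, -3), axis=(0, 1))
print(estimate_shift(a, b))
```

The point of the exercise is the hierarchy: the total search effort stays small because the expensive fine-level search is confined to the neighbourhood predicted by the coarse levels, exactly the economy the pyramid scheme above is meant to achieve for the full orientation problem.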
Image pyramids should be employed to take advantage 
of the concept of hierarchy already mentioned. Note that 
the repetition of feature extraction, matching, and para- 
meter determination from one pyramid level to the next 
leads to a close integration of the coordinate measure- 
ment and the actual computations, two tasks which are 
well separated in analytical photogrammetry. In view of what was discussed about image matching in chapter 2, the mentioned steps are now described in more detail.
In order to detect overlap, rotation, and scale differences between the images, matching primitives must be used which are independent of absolute position, rotation, and scale. The cross-correlation coefficient is known to be neither scale nor rotation invariant. Least squares matching cannot be used either, because it requires accurate approximate values for the unknowns, which are not available. Thus, area-based matching is not an appropriate method for this task. Feature-based methods can be
employed to detect rotation differences between images. 
For example, straight lines can be detected in both im- 
ages, followed by a comparison of the histograms of line 
direction. A detection of scale differences on the basis of 
the line length, however, is more problematic, because 
lines are often broken up into small pieces. Note that the 
same argument motivated the design of the line extrac- 
tion algorithm by Burns et al. (1986) now widely used in 
computer vision and photogrammetry. Rotation invari- 