Papers accepted on the basis of peer-reviewed full manuscripts (Part A)

ISPRS Commission III, Vol. 34, Part 3A, "Photogrammetric Computer Vision", Graz, 2002

Figure 9: Same section as in Figure 8, but this time with the 2-norm instead of the truncated quadratic.
data. An example of the advantage obtained by the truncated quadratic is illustrated in Figures 8 and 9. To further investigate the capability and effect of using different error functions, we compared the 80% smallest residuals of the three error functions used, see Table 2. The underlying assumption is that no more than 20% of the data is erroneous. It is seen that, under this assumption, there is a considerable improvement in choosing error functions other than the 2-norm.
  
Error Function        80% smallest residuals
2-Norm                5.39 pixels
Huber's M-estimator   4.12 pixels
Truncated Quadratic   2.34 pixels

Table 2: Comparison of the 80% smallest residuals
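The three error functions of Table 2 have standard forms, and the trimmed comparison is a mean over the 80% smallest residuals. A minimal sketch (the threshold values `k` and `t` are illustrative assumptions, not values from the paper):

```python
import numpy as np

def rho_l2(r):
    # 2-norm (squared error): grows quadratically, so outliers dominate.
    return r ** 2

def rho_huber(r, k=1.345):
    # Huber's M-estimator: quadratic near zero, linear beyond k.
    a = np.abs(r)
    return np.where(a <= k, 0.5 * r ** 2, k * a - 0.5 * k ** 2)

def rho_truncated(r, t=2.0):
    # Truncated quadratic: residuals beyond t contribute a constant cost.
    return np.minimum(r ** 2, t ** 2)

def trimmed_mean(res, frac=0.8):
    # Mean over the frac smallest absolute residuals, as in Table 2.
    res = np.sort(np.abs(res))
    return res[: int(frac * len(res))].mean()
```

The truncated quadratic caps the influence of any single residual, which is what makes gross errors effectively ignorable.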
5.2 Error Tolerance
To provide a rigorous experimental validation of the proposed method, a set of experiments was made by taking a 'good' data set and gradually degrading it with large errors at random. The data set used was the Court sequence, see Figure 5. Twenty features were traced by hand through the eight frames. The proposed algorithm with the truncated quadratic was compared to the method of Christy-Horaud, thereby assessing the improvements achieved. Experiments were performed with three types of errors, namely large Gaussian noise, missing features and swapped features.
In the first experiment an increasing number of the 2D features were corrupted by large Gaussian noise. As a quality measure, the mean error between the original non-corrupted data and the reprojected 3D structure was calculated, see Figure 10. In some cases the algorithm did not converge, or in the Christy-Horaud case the Euclidean reconstruction failed. This is illustrated in the figures by not drawing a bar. It is seen that the effect of this corruption is considerably diminished with the truncated quadratic compared to the original method.
  
Figure 10: Percentage of 2D features corrupted by Gaussian noise with variance of 400 pixels, and the corresponding mean error of the reconstruction (bars: Christy-Horaud vs. the proposed method).
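The degradation step of this first experiment can be sketched as follows (the helper names and use of NumPy are illustrative assumptions; the noise variance of 400 pixels follows Figure 10):

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_features(pts2d, percent, var=400.0, rng=rng):
    """Add large Gaussian noise (variance var) to percent% of the
    2D features, chosen at random; pts2d has shape (n_points, 2)."""
    pts = pts2d.copy()
    n = len(pts)
    k = int(round(percent / 100.0 * n))
    idx = rng.choice(n, size=k, replace=False)
    pts[idx] += rng.normal(0.0, np.sqrt(var), size=(k, 2))
    return pts

def mean_error(clean, reprojected):
    # Mean 2D distance between the original non-corrupted features
    # and the reprojected 3D structure, as reported in Figure 10.
    return np.linalg.norm(clean - reprojected, axis=1).mean()
```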
  
Figure 11: Percentage of 2D features removed, and the corresponding mean error of the reconstruction.
In the next experiment an increasing number of 2D features were removed, see Figure 11. It is seen that the proposed approach converges and that the effect on the reconstruction is negligible. It is impossible to deal with missing data in the original method.
In the last experiment an increasing number of 2D features were swapped within a given frame, see Figure 12. The swapping of features is a good emulation of mismatched features. Again a considerable improvement is observed.
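The swapping scheme can be sketched as follows (a hypothetical helper, assuming one frame's 2D features are stored as the rows of an array):

```python
import numpy as np

def swap_features(pts2d, n_swaps, rng):
    """Swap n_swaps randomly chosen pairs of 2D features within one
    frame, emulating mismatched correspondences."""
    pts = pts2d.copy()
    for _ in range(n_swaps):
        i, j = rng.choice(len(pts), size=2, replace=False)
        pts[[i, j]] = pts[[j, i]]
    return pts
```

Since the coordinates themselves are unchanged, only their assignment to tracks, this corrupts the correspondences rather than the measurements.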
It is seen that the proposed approach, used to implement the truncated quadratic as an error function, yields considerably better results. It is also noted that the higher degree of convergence is due to the proposed approach to Euclidean reconstruction.
5.3 Euclidean Reconstruction
To further illustrate the benefits of the proposed method for Euclidean reconstruction, a simulated data set was constructed. Here features were swapped by the same scheme as in Figure 12, and the number of non-converging runs was counted in bins of 5, see Figure 13. The result clearly demonstrates the advantages of the proposed method.
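The binning of Figure 13 can be sketched as follows (a hypothetical helper, assuming each run records its swap percentage and whether it converged):

```python
import numpy as np

def nonconvergence_histogram(percent_swapped, converged, bin_width=5):
    """Count non-converging runs per bin of swap percentage
    (bin width 5, as in Figure 13)."""
    p = np.asarray(percent_swapped, dtype=float)
    ok = np.asarray(converged, dtype=bool)
    edges = np.arange(0.0, p.max() + bin_width, bin_width)
    counts, _ = np.histogram(p[~ok], bins=edges)
    return edges, counts
```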
Figure 13: Number of non-converging runs, counted in bins of 5.
	        