  
rtd 
1e evaluation. 
ine-based cali- 
lane-based ca- 
ox for Matlab' 
| latest version 
from the cited 
ie points were 
asta' (Kalispe- 
Tables 1-3. 
3: ON pixel) 
  
  
  
  
  
  
  
  
  
  
2 Oo 
10^) | (pixel) 
1.40 | +0.10 
1.43 | £0.10 
43 | £0.09 
.66 | £0.51 
.50 | +0.50 
49 | £0.50 
.35 | £0.99 
.23 | +0.99 
.23 | £1.00 
  
€: +6, pixel) 
  
  
  
  
  
  
  
  
  
  
K2 Oo 
10'Y | (pixel) 
.55 | £0.10 
.54 | +£0.10 
„55 | £0.10 
.37 | +0.50 
.20 | £0.49 
.20 | £0.50 
.46 | +0.97 
.28 | +0.97 
29 | 0.98 
  
€: +oy pixel) 
  
  
  
  
  
  
  
  
  
  
Ko Oo 
10!*) | (pixel) 
.50 | £0.10 
50 | «0.10 
50 | £0.10 
=50 | #03] 
.49 | £0.50 
.49 | £0.50 
.56 | +£0.99 
.5] +0.97 
S] 30.98 
il, x, and y, as 
. The symbols 
  
The symbols CS, PB and BA stand, respectively, for calibration sphere, plane-based calibration and bundle adjustment. In all cases, the standard error of the unit weight (σo) is also given.
Generally, it is not possible to directly compare different results for the camera calibration parameters, as in each case these are correlated with different quantities. Notwithstanding this fact, one may claim that the results of the developed algorithm compare satisfactorily with both plane-based calibration and self-calibration, i.e. robust approaches resting on object-to-image correspondences (it is noted, however, that the latter two methods yield almost identical results here, basically due to object planarity and the lack of tie points in self-calibration). Besides, only one set of noisy data has been introduced in each case, which implies that the presented results reflect the effects of the particular perturbations. Further tests are needed to establish the extent to which the studied method is susceptible to noise.
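Such noise tests could, for instance, take the form of a simple Monte Carlo loop over repeated perturbations of the synthetic observations. The sketch below is only an illustration and not part of the reported experiments; the routine run_calibration and the noise level used are hypothetical placeholders.

    import numpy as np

    def monte_carlo_noise_test(image_points, run_calibration, sigma=0.5,
                               trials=100, seed=0):
        # Perturb the synthetic image observations (one (n_i, 2) array per
        # image) with Gaussian noise of standard deviation 'sigma' (pixels)
        # and collect the parameters returned by the user-supplied
        # calibration routine, e.g. a tuple (c, x0, y0, k1, k2).
        rng = np.random.default_rng(seed)
        samples = []
        for _ in range(trials):
            noisy = [pts + rng.normal(0.0, sigma, pts.shape) for pts in image_points]
            samples.append(run_calibration(noisy))
        samples = np.asarray(samples)
        # Empirical mean and scatter of each parameter over all trials.
        return samples.mean(axis=0), samples.std(axis=0)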
4.2 Real data 
For the application with real data, a set of 20 images (640×480) was used, drawn from the cited web site of Bouguet (Fig. 6). The calibration parameters were computed with all three methods. Table 4 presents the results for the parameters along with their respective estimated precision; an exception is the aspect ratio, for which the deviation from unity (1-a) is presented. It must be noted that the specific plane-based algorithm used here does not explicitly compute the aspect ratio; besides, it uses a different model for radial distortion. Hence, these parameters have been transformed into the framework of the other two approaches.
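As an illustration of such a transformation, the sketch below assumes that the plane-based tool follows Bouguet's convention (focal lengths fx, fy in pixels, radial coefficients kc1, kc2 applied to normalized coordinates) and that the target model is the pixel-based polynomial dr = k1*r^3 + k2*r^5; the exact conversion applied in the paper is not reproduced here.

    def bouguet_to_pixel_model(fx, fy, kc1, kc2):
        # Assumed source convention: r_n = r / f, dr_n = kc1*r_n**3 + kc2*r_n**5.
        # Assumed target convention: r in pixels, dr = k1*r**3 + k2*r**5,
        # with camera constant c and aspect ratio a = fy / fx.
        c = fx
        a = fy / fx
        k1 = kc1 / c**2   # c * kc1 * (r/c)**3 = (kc1 / c**2) * r**3
        k2 = kc2 / c**4
        return c, a, k1, k2

For example, with fx of about 657 pixels and kc1 of about -0.26 (hypothetical figures of a plausible order for such a camera), k1 comes out near -6×10^-7, i.e. the order of magnitude listed in Table 4.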
  
  
Table 4. Results for the data of Bouguet (20 images)

       |   c (pixel)   |  1-a (‰)  |  xo (pixel)  |  yo (pixel)  | k1 (×10^-7) | k2 (×10^-13) | σo (pixel)
  CS   | 656.34 ±0.24  | -2.8 ±0.3 | 302.34 ±0.12 | 242.09 ±0.23 | -6.03 ±0.03 |  7.70 ±0.25  |   ±0.13
  PB   | 657.35 ±0.34  |     …     | 302.92 ±0.56 | 242.98 ±0.60 |    -5.92    |     6.85     |   ±0.12
  BA   | 657.64 ±0.10  | -0.6 ±0.1 | 301.48 ±0.17 | 239.79 ±0.15 | -6.02 ±0.02 |  7.12 ±0.15  |   ±0.09
Here again, it is seen that the results of the studied approach are essentially comparable to those of the other two methods. However, certain differences are evident (regarding the aspect ratio, for instance, or the camera constant); furthermore, the algorithm had a clear difficulty in converging here. This problem was attributed to a particular image (seen at the far bottom right in Fig. 6). This view is characterised by a very weak perspective in one direction: its rotation about the vertical Y axis is extremely small (φ = -0.58°). The consequence is that the vanishing point of the horizontal X direction tends to infinity (its x-coordinate is about 1.6×10^5). Although the algorithm proved capable of handling even this unfavourable image geometry, exclusion of this particular image yielded the better results tabulated in Table 5.
  
Table 5. Results for the data of Bouguet (19 images)

       |   c (pixel)   |  1-a (‰)  |  xo (pixel)  |  yo (pixel)  | k1 (×10^-7) | k2 (×10^-13) | σo (pixel)
  CS   | 657.49 ±0.27  | -0.3 ±0.4 | 303.43 ±0.13 | 241.31 ±0.23 | -6.05 ±0.03 |  7.86 ±0.27  |   ±0.11
  PB   | 657.29 ±0.34  |     …     | 303.25 ±0.56 | 242.54 ±0.61 |      …      |     7.01     |   ±0.12
  BA   | 657.59 ±0.10  | -0.4 ±0.1 | 302.47 ±0.17 | 241.55 ±0.15 | -6.03 ±0.02 |  7.18 ±0.17  |     …
However, this example confirms that images with one (or both) of the vanishing points close to infinity might indeed undermine the adjustment. Having first estimated the initial values, a basic measure would be to automatically omit any image exhibiting a vanishing point farther away than (or, respectively, a rotation φ or ω smaller than) a suitably selected threshold.
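A screening step of this kind might look as follows; this is a minimal sketch only, and the thresholds, as well as the assumption that approximate values of x0, y0 and c are already available, are purely illustrative.

    import numpy as np

    def image_is_usable(vanishing_points, x0, y0, c,
                        max_vp_distance=1.0e4, min_rotation_deg=1.0):
        # 'vanishing_points' holds the two estimated vanishing points of an
        # image as (x, y) pixel pairs. A vanishing point lying very far from
        # the principal point corresponds to a very small rotation about the
        # respective axis (roughly angle = atan(c / distance)), i.e. to weak
        # perspective in that direction. Either criterion may be used.
        for vx, vy in vanishing_points:
            distance = np.hypot(vx - x0, vy - y0)
            rotation = np.degrees(np.arctan2(c, distance))
            if distance > max_vp_distance or rotation < min_rotation_deg:
                return False
        return True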
5. CONCLUDING REMARKS 
Recently, the authors have reported on the photogrammetric ex- 
ploitation of single uncalibrated images with one vanishing point 
for affine reconstruction (Grammatikopoulos et al., 2002), and 
on camera calibration using single images with three vanishing 
points (Grammatikopoulos et al., 2003). Here, a camera calibra- 
tion algorithm is presented for independent single images with 
two vanishing points (in orthogonal directions). A direct geo- 
metric treatment has shown that, for such images, the loci of the 
projection centres in the image systems are (semi)spheres, each 
defined by the respective pair of vanishing points. The equation 
of this ‘calibration sphere’ relates explicitly the interior orienta- 
tion parameters with the four (inhomogeneous) vanishing point 
coordinates. Actually, this is a — surely more familiar to photo- 
grammetrists — geometric (Euclidean) interpretation of the pro- 
jective geometry approaches adopted in computer vision. 
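To make this explicit in the notation of the present summary (with V1 = (x1, y1) and V2 = (x2, y2) the vanishing points of the two orthogonal directions, (x0, y0) the principal point and c the camera constant; the symbols are those of this illustration, not necessarily of the paper), the orthogonality of the directions from the projection centre towards the two vanishing points gives

    (x_1 - x_0)(x_2 - x_0) + (y_1 - y_0)(y_2 - y_0) + c^2 = 0 .

Read as an equation in (x0, y0, c), this is indeed a sphere centred at the midpoint of the segment V1V2 (in the plane c = 0), with radius equal to half the distance between the two vanishing points; the physically meaningful half-space c > 0 yields the 'semi-sphere' mentioned above.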
Based on this, the implemented algorithm simultaneously adjusts all point observations on the two sets of concurring lines of each view. With ≥ 3 images, the outcome is an estimation of the camera constant, the principal point location and the radial lens distortion curve; for > 3 images, the image aspect ratio can also be recovered.
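For orientation purposes only, a deliberately simplified alternative to such an adjustment (not the authors' implementation, which works on the line observations themselves, and ignoring distortion, aspect ratio and proper weighting) would intersect the calibration spheres of several images linearly:

    import numpy as np

    def intersect_calibration_spheres(vanishing_point_pairs):
        # Each image contributes the constraint
        #   (x1 - x0)(x2 - x0) + (y1 - y0)(y2 - y0) + c**2 = 0.
        # With the substitution t = x0**2 + y0**2 + c**2 this becomes linear,
        #   (x1 + x2)*x0 + (y1 + y2)*y0 - t = x1*x2 + y1*y2,
        # so >= 3 images in general position allow a least-squares solution.
        A, b = [], []
        for (x1, y1), (x2, y2) in vanishing_point_pairs:
            A.append([x1 + x2, y1 + y2, -1.0])
            b.append(x1 * x2 + y1 * y2)
        x0, y0, t = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)[0]
        c = np.sqrt(t - x0**2 - y0**2)   # may fail for noisy or ill-posed data
        return x0, y0, c

In practice such a direct solution could at most provide initial values; the adjustment described above treats the measured line points themselves as the observations.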
The algorithm has been tested with fictitious and real data. Al- 
though further experimentation is required, these first results in- 
dicate that — in terms of accuracy and precision — the presented 
method, which adjusts observations from all available images, 
compares very satisfactorily to both plane-based calibration and 
photogrammetric bundle adjustment. 
This aspect needs to be underlined, since the latter two robust 
approaches are bound to space-to-image and/or image-to-image 
correspondences. The developed method, on the contrary, pre- 
serves all main advantages of a vanishing point based approach. 
Thus, there is no need for calibration objects or any prior metric information (points, lengths, ratios etc.). The mere existence of space lines in two orthogonal directions (a frequent occurrence in man-made environments) suffices for the calibration
process. Evidently, this also implies that independent images (in 
principle, with identical interior orientation) of totally different 
3D or planar scenes may well be used. 
It is clear that an error analysis is needed to study the effects of the number of images, as well as of the camera rotations relative to the space system axes (which determine the positions of the vanishing points on the image). The question of vanishing points tend-