Full text: XVIIth ISPRS Congress (Part B5)
The reduction of the image search space is thus a
way to increase the performance and efficiency of
vision systems. Estimates of the camera state and its
covariance matrix can
be used to predict the feature position in image space 
and the search limited to a window around this 
position. The prediction of the feature position in the 
image space can be done by projecting object entities into
image space using the collinearity equations or even the
measurement model defined in this paper. Prediction of 
any image feature position is bounded only by the 
quality of the state estimates. Thus, the window
must be defined taking into account the covariance
matrix of the predicted state estimate and the feature 
dimensions. 
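As an illustration, the prediction-and-windowing step can be sketched as follows. This is a minimal sketch, not the paper's implementation: the omega-phi-kappa rotation convention, the simplified Jacobian argument, and the 3-sigma bound are assumptions.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix from omega-phi-kappa angles (radians); assumed convention."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(point, cam_pos, angles, f):
    """Collinearity equations: 3D object point -> image coordinates (mm)."""
    R = rotation_matrix(*angles)
    d = R.T @ (point - cam_pos)        # point in the camera frame
    return np.array([-f * d[0] / d[2], -f * d[1] / d[2]])

def search_window(pred_xy, J, P, margin, k=3.0):
    """Search window around a predicted feature position: a k-sigma bound
    from the propagated covariance J P J^T, enlarged by a feature-size
    margin, as the text requires (window from state covariance + feature
    dimensions). J is the Jacobian of the projection w.r.t. the state."""
    C = J @ P @ J.T                    # 2x2 image-space covariance
    half = k * np.sqrt(np.diag(C)) + margin
    return pred_xy - half, pred_xy + half
```

A point straight ahead of the camera projects to the image centre, and the window widens with the predicted-state variance.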
A recursive procedure in which sequential 
estimates are used to reduce the feature search space 
is depicted in Figure 5.3.1. 
  
predicted estimate
        |
        v
SEARCH FOR THE i-th LINE:        <----+
    window definition;                |
    Hough space computation;          |
    line location;                    |
        |                             |
        v                             |
filtered estimate computation         |
based on the i-th line                |
        |                             |
        +--------[i = i + 1]----------+
        |
        v
final filtered estimate

Figure 5.3.1 Recursive search procedure.
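The loop of Figure 5.3.1 can be sketched as follows; `predict_line`, `hough_locate` and `kalman_update` are hypothetical helpers standing in for the projection, Hough-space search and filter-update steps described above, not functions from the paper.

```python
def recursive_line_search(x_pred, P_pred, object_lines, image,
                          predict_line, hough_locate, kalman_update):
    """Recursive procedure of Figure 5.3.1: each located line refines the
    state estimate used to predict (and window) the next line."""
    x, P = x_pred, P_pred
    for obj_line in object_lines:
        # 1. window definition: predict the image line from the current
        #    estimate and bound the window by the state covariance
        line_pred, window = predict_line(obj_line, x, P)
        # 2. Hough space computation and line location inside the window
        measurement, R_cov = hough_locate(image, window)
        if measurement is None:
            continue                  # line not found; keep current estimate
        # 3. filtered estimate computation based on this line
        x, P = kalman_update(x, P, measurement, R_cov, obj_line)
    return x, P                       # final filtered estimate
```

With stub helpers, one can check that the state is updated once per line and the covariance shrinks monotonically over the sequence.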
Other advantages of using predicted estimates for
image feature search are:
- the estimated feature length can be used to locate
the most probable cluster in Hough space;
- fewer candidate clusters will be present in the
Hough space.
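For instance, since a straight line of n pixels contributes roughly n votes to its accumulator cell, the predicted length can rank candidate peaks. The sketch below rests on that assumption; the 50% peak threshold is arbitrary, for illustration only.

```python
import numpy as np

def most_probable_cluster(accumulator, predicted_votes):
    """Pick the Hough-accumulator peak whose vote count is closest to the
    count predicted from the estimated feature length (votes are roughly
    proportional to line length in pixels)."""
    # candidate peaks: cells with at least half the maximum vote count
    peaks = np.argwhere(accumulator >= 0.5 * accumulator.max())
    votes = accumulator[tuple(peaks.T)]
    best = peaks[np.argmin(np.abs(votes - predicted_votes))]
    return tuple(best)                # (rho index, theta index)
```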
6. RESULTS 
6.1 Introduction 
In the previous sections mathematical expressions 
relating straight features in object and image space 
and their treatment using Kalman filtering were
presented. In this section results obtained from 
simulated data are presented and discussed. 
The camera inner parameters (focal length, principal
point, optical distortion) were assumed known, and it
was also assumed that neither the robot nor the object
moved during image acquisition. The following parameters
were used for the simulated camera: 15 mm focal length,
10x10 mm imaging area and 10x10 um pixel size.
Once the exterior camera parameters (position and
orientation) were established, the image coordinates of
the object points (corners) were computed using the
collinearity equations. Random errors were introduced
into these points, which represent the endpoints of the
straight lines; the image line equations were finally
computed from these pairs of points.
Three sets of data became available: 
- object straight lines parametric equations, assumed 
to be known from the object model; 
- exterior camera parameters (camera vector state) and 
the associated error matrix; 
- image lines equations and their covariance matrix. 
The covariance matrices of the image lines were computed
using covariance propagation.
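A rough sketch of this data generation follows; the (rho, theta) normal-form line parameterization and the half-pixel noise level are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
f = 15.0                              # focal length, mm (from the text)
pixel = 0.01                          # 10 um pixel size, in mm

def project(X, Xc, R):
    """Collinearity equations: object point X -> image point (mm)."""
    d = R.T @ (X - Xc)
    return -f * d[:2] / d[2]

def simulate_line(P1, P2, Xc, R, sigma=0.5 * pixel):
    """Project the endpoints of an object line, perturb them with random
    errors, and return the image line in normal form x*cos(t) + y*sin(t) = r."""
    p1 = project(P1, Xc, R) + rng.normal(0.0, sigma, 2)
    p2 = project(P2, Xc, R) + rng.normal(0.0, sigma, 2)
    dx, dy = p2 - p1
    theta = np.arctan2(dx, -dy)       # direction of the line normal
    rho = p1[0] * np.cos(theta) + p1[1] * np.sin(theta)
    return rho, theta
```

With the noise switched off, both projected endpoints satisfy the returned line equation exactly.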
6.2 Single Frame Calibration 
It was supposed that a single camera, static in
space, observed a cube of 70 mm. The base frame is
coincident with the station frame, and the object frame
is 200 mm from the origin of the base frame. For the
camera state vector of Table 6.2.1 the resulting
simulated image is presented in Figure 6.2.1.
Table 6.2.1 True Camera State and Predicted Values

  Camera State    True State   Predicted State   State Error   Predicted Variance
  kappa (rad)       0.0           0.02             0.02           (0.02)^2
  phi   (rad)       0.0          -0.02            -0.02           (0.02)^2
  omega (rad)       0.959931      0.939931         0.02           (0.02)^2
  Xc    (mm)      230           225                5              (5.)^2
  Yc    (mm)     -200          -204                4              (3.3)^2
  Zc    (mm)      200           205               -5              (5.)^2

Figure 6.2.1 Simulated image of a cube. 
The wireframe cube shown in Figure 6.2.1 can be 
described by twelve lines in image space, the 
correspondent object lines of which in base coordinates 
are known. Using the recursive approach stated in the
previous sections, estimates for the camera state vector
are obtained. In Figure 6.2.2, graphs are presented
showing the true errors and estimated standard 
deviations for rotation and translation variables. The 
true error is defined as the difference between the 
estimated and the true parameter value and the standard 
deviation is defined as the square root of the 
estimated variance. 
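These two plotted quantities follow directly from their definitions; in this trivial sketch, x_est, x_true and P_est denote the estimated state, the true state and the estimated covariance matrix after each line.

```python
import numpy as np

def error_metrics(x_est, x_true, P_est):
    """True error and estimated standard deviation, as plotted in Fig. 6.2.2."""
    true_error = x_est - x_true           # estimated minus true parameter value
    std_dev = np.sqrt(np.diag(P_est))     # square root of the estimated variance
    return true_error, std_dev
```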
[Figure 6.2.2 comprises four plots against the line index: true errors (rad)
and estimated standard deviations for the rotations, and true errors and
estimated standard deviations for the translations Xc, Yc, Zc.]
Figure 6.2.2 Error analysis in single frame calibration 
From the analysis of the graphs in Figure 6.2.2
we can conclude:
- the filter shows strong convergence over the twelve
lines. In fact, when the ninth feature was
	        