Proceedings, XXth Congress (Part 3)

International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXV, Part B3. Istanbul 2004
3.2 Jacobian approximation 
In most applications, H(X) and g(X) can be complex to compute, so a first order approximation is performed:

$$ H(X) \approx J^\top(X)\,J(X) \qquad (10) $$

where $J$ is the Jacobian matrix of the residual vector $f$:

$$ J_{i,j}(X) = \frac{\partial f_i(X)}{\partial X_j}, \quad \text{with } i \in [1, N_o],\ j \in [1, N_u]. \qquad (11) $$

Using the same approximation, $g$ is computed using:

$$ g(X) \approx J^\top(X)\,f(X) \qquad (12) $$
This problem is equivalent to solving the linear least squares problem:

$$ J(X)\,dX = -f(X) \qquad (13) $$

The system can be expressed using the usual matrix form:

$$ \left[ J^\top J \right] \left[ dX \right] = \left[ -J^\top f \right] \qquad (14) $$
Since the matrix is symmetric and positive, the system can be solved using the Cholesky algorithm.
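The Gauss-Newton step of equations 10 to 14 can be sketched as follows. This is a minimal NumPy/SciPy illustration, not the paper's implementation; the line-fitting residuals used as an example are hypothetical.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gauss_newton_step(J, f):
    """One Gauss-Newton step: solve (J^T J) dX = -J^T f via Cholesky.

    J : (N_o, N_u) Jacobian of the residuals at the current X
    f : (N_o,)     residual vector at the current X
    """
    H = J.T @ J            # approximate Hessian, equation (10)
    g = J.T @ f            # approximate gradient, equation (12)
    # J^T J is symmetric positive definite (for full-rank J),
    # so a Cholesky factorization applies.
    c, low = cho_factor(H)
    return cho_solve((c, low), -g)

# Hypothetical example: residuals of the linear model y = a*x + b
# at the trial point X = (a, b) = (0, 0).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
a, b = 0.0, 0.0
f = (a * x + b) - y                       # residual vector
J = np.stack([x, np.ones_like(x)], axis=1)  # d(residual)/d(a, b)
dX = gauss_newton_step(J, f)
# The model is linear in (a, b), so a single step reaches the
# optimum a = 2, b = 1 (the data satisfy y = 2x + 1 exactly).
```

For a genuinely non-linear cost, the step is iterated until dX becomes negligible.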
3.3 Constraints 
At the beginning of this section, we did not take the constraints into account. So, even if at step $k$, $X^{(k)}$ satisfies the constraints, $X^{(k+1)} = X^{(k)} + dX$, with $dX$ the solution of equation 14, has no reason to satisfy them.
The first obvious solution to this problem is to modify each $X^{(k+1)}$ in order to satisfy the constraints (by renormalization or projection onto the constraint space). This is an easy and direct solution, but the theory does not guarantee convergence; in practice, convergence is slow. An alternative solution is to introduce the constraints into the resolution process (see (Triggs et al., 2000)).
Equation 5 can be solved using Lagrange multipliers. Let $\lambda \in \mathbb{R}^{N_c}$ be the vector of Lagrange multipliers. Let the first order expansion of the constraint vector be $c(X + dX) \approx c(X) + C\,dX$, where $C$ is the matrix of the constraint gradients:

$$ C_{i,j} = \frac{\partial c_i}{\partial X_j}, \quad \text{with } i \in [1, N_c],\ j \in [1, N_u]. \qquad (15) $$
Solving equation 5 is equivalent to finding $X + dX$ that optimizes $F + \lambda^\top c$, subject to $c = 0$. The first order expansions of both expressions are:

$$ 0 = \frac{\partial}{\partial dX}\left( F(X + dX) + c(X + dX)^\top \lambda \right) \approx g(X) + H(X)\,dX + C^\top(X)\,\lambda \qquad (16) $$

and

$$ 0 = c(X + dX) \approx c(X) + C\,dX \qquad (17) $$

Combining these two equations gives the expression for each step of the algorithm:

$$ \begin{bmatrix} H & C^\top \\ C & 0 \end{bmatrix} \begin{bmatrix} dX \\ \lambda \end{bmatrix} = \begin{bmatrix} -g \\ -c \end{bmatrix} \qquad (18) $$
Using the same approximation as in section 3.2, the system becomes:

$$ \begin{bmatrix} J^\top J & C^\top \\ C & 0 \end{bmatrix} \begin{bmatrix} dX \\ \lambda \end{bmatrix} = \begin{bmatrix} -J^\top f \\ -c \end{bmatrix} \qquad (19) $$
The main difference between this system and equation 14 is that the matrix is no longer positive definite, so it cannot be solved using the Cholesky algorithm. Other methods such as the LU, QR or Householder decompositions can be used. We generally use the LU decomposition.
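One constrained step (equation 19) can be sketched as follows, assuming NumPy/SciPy. The toy quadratic problem is hypothetical, chosen so that a single step lands exactly on the constrained optimum.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def constrained_step(J, f, C, c):
    """Solve the augmented system of equation (19):

        [ J^T J   C^T ] [ dX     ]   [ -J^T f ]
        [ C       0   ] [ lambda ] = [ -c     ]

    The matrix is symmetric but indefinite, so an LU
    factorization is used instead of Cholesky.
    """
    Nu, Nc = J.shape[1], C.shape[0]
    K = np.zeros((Nu + Nc, Nu + Nc))
    K[:Nu, :Nu] = J.T @ J
    K[:Nu, Nu:] = C.T
    K[Nu:, :Nu] = C
    rhs = np.concatenate([-(J.T @ f), -c])
    sol = lu_solve(lu_factor(K), rhs)
    return sol[:Nu], sol[Nu:]        # dX and the Lagrange multipliers

# Toy problem: minimize ||X - (3, 1)||^2 subject to x1 + x2 = 1.
# Residuals f(X) = X - (3, 1), hence J = I; constraint c(X) = x1 + x2 - 1.
X0 = np.zeros(2)
J = np.eye(2)
f = X0 - np.array([3.0, 1.0])
C = np.array([[1.0, 1.0]])
c = np.array([X0.sum() - 1.0])
dX, lam = constrained_step(J, f, C, c)
# Quadratic cost + linear constraint: one step reaches the
# constrained optimum X0 + dX = (1.5, -0.5), which satisfies x1 + x2 = 1.
```

Note that the updated point satisfies the constraint exactly here only because the constraint is linear; for non-linear constraints it is satisfied to first order, which is why the system is re-solved at each iteration.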
4. ACCURACY 
This section deals with error propagation in the bundle adjust- 
ment problem. It enables us to assess the accuracy of all recon- 
structed parameters. 
4.1 Accuracy prediction
It is well known that, if the observed values $v^{obs}$ are corrupted only by Gaussian noise, the vector $X$ which minimizes $F(X)$ is also the maximum likelihood vector. Let $V$ be the variance matrix:

$$ V_{i,j} = E\left[ (X_i - E(X_i))\,(X_j - E(X_j)) \right], \quad \text{with } i, j \in [1, N_u] \qquad (20) $$

where $E(x)$ denotes the expectation of variable $x$.
For $X$ at the minimum of $F$, the inverse of the Hessian matrix $H(X)$ is equal to the variance matrix $V$. Some justifications can be found in (Hartley and Zisserman, 2002) or (Förstner, 2004).
So, it is possible to know the accuracy of, and the correlation between, the unknowns just by inverting the Hessian matrix. This result is confirmed by experimental results (see section 5.).
A correlation matrix can also be computed:

$$ \mathrm{Corr}_{i,j} = \frac{V_{i,j}}{\sqrt{V_{i,i} \cdot V_{j,j}}}, \quad \text{with } i, j \in [1, N_u] \qquad (21) $$

Each matrix element $\mathrm{Corr}_{i,j}$ gives the correlation (between $-1$ and $+1$) between the two parameters $X_i$ and $X_j$.
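The variance and correlation computation can be sketched as follows (NumPy assumed). Following the text, $V$ is taken as the inverse of the approximate Hessian $J^\top J$, which presumes unit-variance observation noise; the small Jacobian below is hypothetical.

```python
import numpy as np

def covariance_and_correlation(J):
    """V = (J^T J)^{-1} (inverse of the approximate Hessian),
    plus the correlation matrix of equation (21)."""
    V = np.linalg.inv(J.T @ J)
    d = np.sqrt(np.diag(V))
    Corr = V / np.outer(d, d)   # Corr_ij = V_ij / sqrt(V_ii * V_jj)
    return V, Corr

# Hypothetical 3-observation, 2-unknown Jacobian.
J = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
V, Corr = covariance_and_correlation(J)
# Diagonal entries of Corr are 1; off-diagonal entries lie in [-1, +1]
# and reveal how strongly two unknowns are coupled.
```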
4.2 Camera position accuracy
We denote by $\hat{p}$ the estimate of the camera position after the bundle adjustment algorithm, and by $V_p$ the $3 \times 3$ submatrix of $V$ corresponding to the variance-covariance matrix of the camera position. Error propagation follows:
$$ p \sim \mathcal{N}(\hat{p}, V_p) \qquad (22) $$

Hence,

$$ (p - \hat{p})^\top V_p^{-1} (p - \hat{p}) \sim \chi^2(3) \qquad (23) $$
Now, given a significance level $\alpha$, we can associate to the estimated camera position a covariance ellipsoid $\mathcal{E}(\hat{p}, \alpha)$:

$$ (p - \hat{p})^\top V_p^{-1} (p - \hat{p}) \le \chi^2(3, \alpha) \qquad (24) $$

with $\chi^2(3, \alpha)$ representing the $\alpha$ quantile of a chi-square law with 3 degrees of freedom. To get the axes lengths of the ellipsoid, a standard eigenvector decomposition of the matrix $V_p$ is performed.
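The ellipsoid-axes computation can be sketched as follows (NumPy/SciPy assumed; the diagonal covariance matrix used as input is hypothetical). Along the eigenvector with eigenvalue $\ell_i$ of $V_p$, the ellipsoid of equation 24 extends $\sqrt{\chi^2(3, \alpha)\,\ell_i}$.

```python
import numpy as np
from scipy.stats import chi2

def covariance_ellipsoid(Vp, alpha):
    """Semi-axes of the confidence ellipsoid of equation (24).

    Vp    : 3x3 variance-covariance matrix of the camera position
    alpha : confidence level (alpha quantile of chi2 with 3 dof)

    Returns the semi-axis lengths and, as columns, the axis directions.
    """
    k = chi2.ppf(alpha, df=3)            # the quantile chi2(3, alpha)
    eigval, eigvec = np.linalg.eigh(Vp)  # Vp is symmetric: use eigh
    return np.sqrt(k * eigval), eigvec

# Hypothetical camera-position covariance (uncorrelated axes).
Vp = np.diag([1.0, 4.0, 9.0])
semi_axes, axes_dir = covariance_ellipsoid(Vp, alpha=0.95)
# The squared semi-axes equal chi2(3, alpha) times the eigenvalues of Vp.
```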