\[
\min_{M,P} \sum_{j=1}^{n} \| V_j (S_j - M P_j) \|_F^2 \qquad (8)
\]
where $V_j$ is a $2k \times 2k$ weighting matrix representing the weights of the $j$-th column of $S$. In the case of Gaussian noise, $V_j^T V_j$ is taken as the inverse covariance structure of $S_j$, whereby (8) is equivalent to minimizing the Mahalanobis distance.
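As an aside, this equivalence can be checked numerically. The sketch below assumes $V_j^T V_j$ is chosen as the inverse covariance of $S_j$; all names and shapes are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3                                    # number of frames (2k rows per column S_j)
Sj = rng.standard_normal(2 * k)          # one column of S (a tracked feature)
MPj = rng.standard_normal(2 * k)         # its current reconstruction M @ P_j

# Assume a known noise covariance Sigma for this column and take
# V_j as a matrix square root of its inverse, so V_j.T @ V_j = inv(Sigma).
A = rng.standard_normal((2 * k, 2 * k))
Sigma = A @ A.T + np.eye(2 * k)          # some SPD covariance
Vj = np.linalg.cholesky(np.linalg.inv(Sigma)).T

r = Sj - MPj
weighted = np.linalg.norm(Vj @ r) ** 2        # ||V_j (S_j - M P_j)||_F^2
mahalanobis = r @ np.linalg.inv(Sigma) @ r    # r^T Sigma^-1 r
print(np.isclose(weighted, mahalanobis))      # True
```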
3.1 Separation with Weights
The solution to (8) is $M$ and $P$ given $S$ and $V_j$. Note that an SVD cannot be applied as for (5). To solve (8), a method similar to the idea in the Christy-Horaud factorization algorithm [Christy and Horaud, 1996] is proposed. This method is generally known as surrogate modeling, see e.g. [Booker et al., 1999]. Surrogate modeling works by applying a computationally 'simpler' model to iteratively approximate the original 'hard' problem.
The best-known example of surrogate modeling is probably Newton's optimization method. Here a 2nd-order polynomial approximation to the objective function is formed in each iteration, and a temporary optimum is found. This temporary optimum is then used to form a new 2nd-order approximation, and thus a new temporary optimum. This is continued until convergence is achieved.
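For concreteness, here is a minimal sketch (my example, not from the paper) of Newton's method read as surrogate modeling, with an assumed convex objective:

```python
import numpy as np

def f(x):             # assumed 'hard' objective (illustrative)
    return np.cosh(x) + 0.1 * x ** 2

def df(x):            # its first derivative
    return np.sinh(x) + 0.2 * x

def d2f(x):           # its second derivative
    return np.cosh(x) + 0.2

x = 2.0
for _ in range(20):
    # Minimize the local quadratic surrogate
    #   f(x) + f'(x) d + 0.5 f''(x) d^2,  giving d = -f'(x) / f''(x).
    step = -df(x) / d2f(x)
    x += step
    if abs(step) < 1e-12:    # converged: the surrogate optimum has stabilized
        break
print(x)   # ~0.0, the minimizer of f
```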
Here (5) is used to iteratively approximate (8), yielding a temporary optimum, which in turn is used to make a new approximation. The approximation is performed by modifying the original data, $S$, such that the solution to (5) with the modified data, $\tilde{S}$, is the same as that of (8) with the original data. Letting $\tilde{\cdot}$ denote modified data, the goal is to obtain:
\[
\min_{M,P} \sum_{j=1}^{n} \| V_j (S_j - M P_j) \|_F^2 = \min_{M,P} \sum_{j=1}^{n} \| \tilde{S}_j - M P_j \|_F^2 \qquad (9)
\]

where $\tilde{N} = [\tilde{N}_1 \dots \tilde{N}_n]$ denotes the residuals:

\[
\tilde{N}_j = \tilde{S}_j - M P_j
\]
Hereby the subspace, $M$, can be found via SVD, and $P$ via the normal equations once $M$ is known. Let $q$ denote the iteration number; the algorithm then goes as follows (an illustrative code sketch is given after the steps):
1. Initialize $\tilde{S}^0 = S$, $q = 1$.
2. Estimate Model: Get $M^q$ as the singular vectors corresponding to the three largest singular values of $\tilde{S}^{q-1}$, via SVD. Get $P^q$ from

\[
\forall j: \quad P_j^q = \left[ (M^q)^T V_j^T V_j M^q \right]^{-1} (M^q)^T V_j^T V_j S_j
\]
3. Calculate Residuals: $N^q = S - M^q P^q$.
Figure 2: A geometric illustration of how the data is modified in steps 3. and 4. of the proposed algorithm for separation with weights.
4. Modify Data:

\[
\forall j: \quad \tilde{N}_j^q = V_j N_j^q \;, \qquad \tilde{S}_j^q = M^q P_j^q + \tilde{N}_j^q
\]
5. If Not Stopped: $q = q + 1$, go to 2. The stop criterion is $\| N^q - N^{q-1} \|_F <$ tolerance.
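As a hedged sketch of steps 1.-5., the following implementation uses illustrative shapes, names, and defaults ($S$ as a $2k \times n$ array, $V$ as a list of per-column weight matrices); it is one possible reading, not the authors' code:

```python
import numpy as np

def separate_with_weights(S, V, tol=1e-9, max_iter=100):
    """Sketch of the weighted separation algorithm (steps 1.-5. above).

    S: (2k, n) measurement matrix; V: list of n (2k, 2k) weight
    matrices V_j. Returns rank-3 factors M (2k, 3) and P (3, n).
    Shapes, names, and defaults are illustrative, not from the paper.
    """
    n = S.shape[1]
    S_mod = np.array(S, dtype=float)       # step 1: S~^0 = S
    N_prev = np.zeros_like(S_mod)
    M = P = None
    for _ in range(max_iter):
        # Step 2: M from the three dominant singular vectors of S~^(q-1),
        U, _, _ = np.linalg.svd(S_mod)
        M = U[:, :3]
        # ... and P_j from the weighted normal equations.
        P = np.empty((3, n))
        for j in range(n):
            WM = V[j] @ M
            P[:, j] = np.linalg.solve(WM.T @ WM, WM.T @ V[j] @ S[:, j])
        # Step 3: residuals against the *original* data.
        N = S - M @ P
        # Step 4: modify the data so that the unweighted problem on S~
        # carries the same residual norms as the weighted problem on S.
        for j in range(n):
            S_mod[:, j] = M @ P[:, j] + V[j] @ N[:, j]
        # Step 5: stop once the residuals stabilize.
        if np.linalg.norm(N - N_prev) < tol:
            break
        N_prev = N
    return M, P
```

With $V_j = I$ for all $j$, the data are never actually modified and the loop reduces to the plain SVD factorization of (5).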
As illustrated in Figure 2, the data, $S_j$, is modified such that the Frobenius norm of the modified residuals, $\tilde{N}_j^q$, equals the norm of the original residuals, $N_j^q$, in the norm induced by the weights, $V_j$. The last part of step 2. ensures that the residual, $N_j^q$, is orthogonal to $M^q$ in the induced norm, since $M^q P_j^q$ is the projection of $S_j$ onto $M^q$ in the induced norm.
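This orthogonality property can be verified numerically; the following toy check (illustrative names and shapes) confirms that the weighted normal equations of step 2. leave the residual orthogonal to $M$ in the induced norm:

```python
import numpy as np

rng = np.random.default_rng(1)
twok = 8
M = rng.standard_normal((twok, 3))           # current subspace estimate
Sj = rng.standard_normal(twok)               # one data column
Vj = np.diag(rng.uniform(0.5, 2.0, twok))    # some weight matrix

WM = Vj @ M
Pj = np.linalg.solve(WM.T @ WM, WM.T @ Vj @ Sj)   # step 2. projection
Nj = Sj - M @ Pj
# Orthogonality in the induced norm: x^T M^T V^T V N_j = 0 for all x.
print(np.allclose(M.T @ Vj.T @ Vj @ Nj, 0))       # True
```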
The reason this approach is used, rather than a quasi-Newton method such as BFGS [Fletcher, 1987] applied to (8), is that it gives faster and more reliable results. This is partly because, with 'standard' optimization methods, the problem is very likely to become ill-conditioned due to the potentially large differences in weights.
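A toy numeric sketch (not from the paper) of why large weight spreads hurt conditioning: the weights enter the normal equations squared, so a factor-10 spread becomes a factor-100 in the condition number:

```python
import numpy as np

# Weights differing by a factor of 10 enter the normal equations
# squared, so the conditioning degrades by a factor of ~100.
w = np.array([10.0, 10.0, 1.0, 1.0])
V = np.diag(w)
print(np.linalg.cond(V))         # 10.0
print(np.linalg.cond(V.T @ V))   # 100.0
```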
To illustrate this, some test runs were made comparing the computation time needed to solve some 'typical' problems, see Table 1. The $S$ matrix was formed by (3), to which noise was added from a compound Gaussian distribution. The compound distribution was formed from two Gaussian distributions, one with a standard deviation 10 times larger than the other. The frequency of the larger-variance Gaussian is the Noise Ratio. It is seen that the proposed method performs better than BFGS, and that the BFGS approach did not converge for $S = 40 \times 40$ and Noise Ratio = 0.5.
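For reference, the compound noise described above can be sampled as a two-component Gaussian mixture; the function below is a hypothetical helper, with parameter names and defaults of my own choosing:

```python
import numpy as np

def compound_gaussian(shape, noise_ratio, sigma=1.0, factor=10.0, rng=None):
    """Two-component Gaussian mixture: with probability `noise_ratio`
    a sample is drawn from the wide component (std = factor * sigma),
    otherwise from the narrow one (std = sigma)."""
    rng = rng or np.random.default_rng()
    wide = rng.random(shape) < noise_ratio
    std = np.where(wide, factor * sigma, sigma)
    return rng.normal(0.0, std)

noise = compound_gaussian((40, 40), noise_ratio=0.5)   # the Table 1 failure case
```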
3.2 Arbitrary Error Functions
When dealing with erroneous data, robust statistical norms
or error functions become interesting, see e.g. [Black and