With this approach all updating is carried out by orthogonal transformations, so the numerical instabilities associated with forming the normal matrix are avoided. Observations may be processed individually or sequentially, as required in an on-line environment, and the solution vector can be obtained at any stage of the process by back substitution. The method can also accommodate the addition, deletion, and replacement of observations as well as the introduction of new parameters. A version of the Givens transformation given by Gentleman (1973), which is square root free, reduces the computational effort and facilitates "fast" recursive processing; it also lends itself to the idea of parallel computation. Together with the ability to keep the reduced system positive definite, this makes the square root free transformations well suited to the update of a close-range adjustment.
The basis for the development is the standard Gauss-Markov estimation model. Given the $m \times 1$ observation vector $l$ and the $m \times n$ design matrix $A$, with $m > n$, the goal is to estimate the parameter vector $x$ in such a way that the elements of the residual vector $v$, given by

$v = Ax - l$,    (1)

are minimized in the least squares sense. For uncorrelated observations, the least-squares solution is

$x = (A'A)^{-1} A'l$.    (2)

The normal matrix may be factored by the Cholesky decomposition $A'A = U'U$, with $U$ upper triangular, or by the square root free form $A'A = U'DU$, with $D$ diagonal; the difference between the two is that the first requires square root operations and the second does not. Rather than forming the normal matrix, the design matrix and the right hand side can be reduced directly by orthogonal transformations. Decomposing

$A = QR$    (3)

and writing the reduced triangular system as

$Ux = d$,    (4)

where $Q$ is an $m \times m$ orthogonal matrix, $R$ is an $n \times n$ upper triangular matrix and $d$ is the reduced right hand side, the normal equations may be rewritten as

$x = (R'Q'QR)^{-1} R'Q'l$.    (5)

Because $Q$ is orthogonal, hence $Q'Q = I$, and $R$ is nonsingular if $A'A$ is, Eq. 5 reduces to

$Rx = Q'l$.    (6)

Equations 4 and 6 are then equivalent with $U = R$ and $d = Q'l$.
R and d are obtained by applying a series of orthogonal
transformations to A and $l$. Thus the solution to this
system may be determined without forming the normal
equation matrix directly.
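As a small numerical illustration of Eqs. 2 through 6, the following sketch (Python/NumPy; the matrix and vector values are invented for demonstration and are not taken from the paper) computes the solution once via the normal equations and once via the QR decomposition, confirming that the two routes agree while the QR route never forms $A'A$:

import numpy as np

# Illustrative design matrix A (m x n, m > n) and observation vector l.
A = np.array([[1.0, 2.0],
              [1.0, 3.0],
              [1.0, 5.0],
              [1.0, 7.0]])
l = np.array([2.1, 2.9, 5.2, 6.8])

# Eq. 2: x = (A'A)^-1 A'l via the normal equations.
x_normal = np.linalg.solve(A.T @ A, A.T @ l)

# Eqs. 3-6: A = QR, then back substitution in R x = Q'l;
# the normal matrix is never formed explicitly.
Q, R = np.linalg.qr(A)                  # reduced QR: Q is m x n, R is n x n
x_qr = np.linalg.solve(R, Q.T @ l)      # triangular system, solved by back substitution

print(np.allclose(x_normal, x_qr))      # True: both routes give the same x

Working with $R$ and $Q'l$ instead of $A'A$ is also numerically preferable, since the condition number of $A'A$ is the square of that of $A$.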
Extending the system with an observational weight matrix $P$ (assuming uncorrelated observations), the least-squares solution is given by

$x = (A'PA)^{-1} A'Pl$.    (7)
Because $P$ is diagonal, $A$ (and likewise $l$) can simply be premultiplied by $P^{1/2}$ and the QR decomposition then applied to this modified design matrix.
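The weighted solution of Eq. 7 thus reduces to the same computation. A minimal sketch (again Python/NumPy with invented numbers) that scales the rows of $A$ and the elements of $l$ by the square roots of the diagonal weights before the QR step:

import numpy as np

A = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, 5.0], [1.0, 7.0]])
l = np.array([2.1, 2.9, 5.2, 6.8])
p = np.array([1.0, 4.0, 1.0, 0.25])      # diagonal of P (uncorrelated observations)

# Premultiplying by P^(1/2) turns Eq. 7 into an ordinary least-squares problem.
A_w = np.sqrt(p)[:, None] * A            # each row of A scaled by sqrt(p_i)
l_w = np.sqrt(p) * l
Q, R = np.linalg.qr(A_w)
x = np.linalg.solve(R, Q.T @ l_w)        # equals (A'PA)^-1 A'Pl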
Now assume a sequential process where Eq. 4
represents the reduced system at stage k - 1. The
addition, deletion, or replacement of observations via
orthogonal transformations is shown below and follows
closely that of Gruen (1985). The addition of one
observation equation to stage k - 1, including a set of new
unknown parameters, results in the following form at
stage k
$\left[\begin{array}{cc} U & 0 \\ \multicolumn{2}{c}{a_k} \end{array}\right] \left[\begin{array}{c} x \\ x' \end{array}\right] = \left[\begin{array}{c} d \\ l_k \end{array}\right]$    (8)
Here, $a_k$ represents the coefficient vector of the added observation, $x'$ is the new $p \times 1$ parameter vector, and $l_k$ is the right hand side of the new observation equation.
The total number of system parameters is n. Applying a
series of n orthogonal transformations (in our case
Givens Transformations)
$Q = Q_1 Q_2 \cdots Q_n$    (9)
to Eq. 8 yields
$Q \left[\begin{array}{cc} U & 0 \\ 0 & 0 \\ \multicolumn{2}{c}{a_k} \end{array}\right] = \left[\begin{array}{c} \bar{U} \\ 0 \end{array}\right]$    (10)
and for the right hand side,
$Q \left[\begin{array}{c} d \\ 0 \\ l_k \end{array}\right] = \left[\begin{array}{c} \bar{d} \\ \bar{l}_k \end{array}\right]$    (11)
Zeros in Eqs. 10 and 11 show that when new parameters
are introduced, the rows and columns of the existing U
matrix and the existing d vector must be extended with
zeros.
Finally, the solution vector for the updated system is
obtained by back substitution into
$\bar{U}x = \bar{d}$.    (12)
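The update of Eqs. 8 through 12 can be sketched as follows (Python/NumPy; the function add_observation and its interface are our own illustration, not part of the paper). The existing U and d are padded with zero rows and columns for any new parameters, the appended observation row is annihilated by a sequence of Givens rotations, and the solution is then available by back substitution:

import numpy as np

def add_observation(U, d, a, l_k):
    """Fold one observation row (a, l_k) into the triangular system U x = d
    (Eqs. 8-12) by Givens rotations. If a is longer than the current U, the
    existing U and d are first extended with zeros for the new parameters."""
    U = np.asarray(U, dtype=float).copy()
    d = np.asarray(d, dtype=float).copy()
    a = np.asarray(a, dtype=float).copy()
    n, k = a.size, U.shape[0]
    if n > k:                                   # new parameters (Eqs. 10 and 11)
        U = np.pad(U, ((0, n - k), (0, n - k)))
        d = np.pad(d, (0, n - k))
    for i in range(n):                          # one rotation per element of a (Eq. 9)
        if a[i] == 0.0:
            continue
        r = np.hypot(U[i, i], a[i])
        c, s = U[i, i] / r, a[i] / r            # rotation parameters, c^2 + s^2 = 1
        U[i, i:], a[i:] = c * U[i, i:] + s * a[i:], -s * U[i, i:] + c * a[i:]  # Eq. 15
        d[i], l_k = c * d[i] + s * l_k, -s * d[i] + c * l_k
    return U, d                                 # the rotated l_k is not needed for x

# Once U has full rank, the parameters follow by back substitution (Eq. 12):
#     x = np.linalg.solve(U, d)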
2.2 Givens Transformations
To illustrate the use of Givens Transformations for the
addition of one observation into an existing system, an
expanded form of Eq. 8 is shown in Figure 1.
$\left[\begin{array}{cccc|c} u_{11} & u_{12} & u_{13} & u_{14} & d_1 \\ & u_{22} & u_{23} & u_{24} & d_2 \\ & & u_{33} & u_{34} & d_3 \\ & & & u_{44} & d_4 \\ \hline a_1 & a_2 & a_3 & a_4 & l_k \end{array}\right]$

Figure 1: U matrix augmented by new coefficient vector
Consider one row vector from the system $Ux = d$ and a coefficient vector from the system $Ax = l$:

$(0 \;\cdots\; 0 \;\; u_{ii} \;\; u_{i,i+1} \;\cdots\; u_{in})$
$(0 \;\cdots\; 0 \;\; a_{i} \;\; a_{i+1} \;\cdots\; a_{n})$    (13)
One Givens Transformation replaces these two vectors
with
$(0 \;\cdots\; 0 \;\; u'_{ii} \;\; u'_{i,i+1} \;\cdots\; u'_{in})$
$(0 \;\cdots\; 0 \;\; 0 \;\; a'_{i+1} \;\cdots\; a'_{n})$    (14)
where
$u'_{ij} = c\,u_{ij} + s\,a_j$
$a'_j = -s\,u_{ij} + c\,a_j$    (15)
$c^2 + s^2 = 1$
To annihilate $a_i$, the rotation parameters are computed from the diagonal elements of $U$ and the corresponding elements of the new coefficient vector.
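One such rotation can be written out explicitly (a Python/NumPy sketch; the helper name givens_rotate_rows is ours). It forms $c$ and $s$ from $u_{ii}$ and $a_i$ so that $c^2 + s^2 = 1$ and the rotated $a_i$ vanishes, exactly as required by Eqs. 13 through 15:

import numpy as np

def givens_rotate_rows(u_row, a_row, i):
    """Apply one Givens rotation (Eqs. 13-15) to row i of U and to the
    coefficient row a, annihilating the element a[i]."""
    r = np.hypot(u_row[i], a_row[i])
    if r == 0.0:                          # nothing to annihilate
        return u_row, a_row
    c, s = u_row[i] / r, a_row[i] / r     # c^2 + s^2 = 1
    u_new = c * u_row + s * a_row         # u'_j = c u_j + s a_j
    a_new = -s * u_row + c * a_row        # a'_j = -s u_j + c a_j, so a'_i = 0
    return u_new, a_new

# Example with i = 2: leading zeros are preserved and a_row[2] is zeroed.
u_row = np.array([0.0, 0.0, 2.0, 1.0, 0.5])
a_row = np.array([0.0, 0.0, 1.5, 0.3, 0.2])
u_row, a_row = givens_rotate_rows(u_row, a_row, 2)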