Transformations (Blais, 1983). Emphasis is placed on
those areas of concern in general OLT relating to close-
range, industrial photogrammetry. These include the
issues of system response time, generation of
approximate values, the compensation for systematic
errors, blunder detection methodology and datum
establishment. A procedure is suggested for sequentially
updating the system obtained via a free-net adjustment.
2.0 SEQUENTIAL ESTIMATION FOR ON-LINE
TRIANGULATION
The process of modifying least squares computations by
updating either the normal equations matrix or its inverse
has been used in control and signal processing for some
time in the context of linear sequential filtering. Taking
into consideration the sequential nature of the
photogrammetric data collection process, the utilisation
of sequential algorithms for OLT follows naturally.
The appropriate simultaneous solution for photo-
triangulation is the well known bundle adjustment. All
data must be available prior to the adjustment.
Conversely, the sequential procedure builds the object in
a stepwise fashion, proceeding image by image (or point
by point) and incorporating data into the system as it is
collected.
The primary goal of OLT for aerial triangulation is to
provide a clean data set for a final, rigorous simultaneous
adjustment. This is achieved by accommodating blunder
detection and re-measurement quickly within the
measurement process. Observations are added to the
system as they become available and deleted or replaced
if found to be unacceptable. Sequential algorithms
enhance this process by updating the system with new
information without starting from scratch with the entire
data set. In the aerial case, blunder detection takes
precedence over the solution vector while in the VM
application, the monitoring of object point precision
throughout the measurement process assumes the
highest priority.
Notable sequential algorithms which have been examined
for OLT include the Kalman filter, which updates the
inverse of the normal equations matrix (Mikhail &
Helmering, 1973), the "Triangular Factor Update"
(Gruen, 1982) which updates the factorised normals
directly, and Givens Transformations which can be used
to update either the factorised normal equation system or
its inverse. With respect to general least-squares,
Givens Transformations possess certain advantages
over other orthogonalisation techniques such as the Householder and Gram-Schmidt methods (Gentleman, 1973; George & Heath, 1980). As applied to OLT,
several studies have demonstrated the superiority of the
Givens method over the Kalman filter and "Triangular
Factor Update" algorithms (Wyatt, 1982; Runge, 1987;
Holm, 1989).
Givens Transformations are based on the use of plane
rotations to annihilate matrix elements. This approach,
compatible with the Cholesky method, provides a direct
method for solving linear least-squares problems without
forming the normal equations. Because all updating is
done in the factorised normals, numerical instabilities
associated with forming and solving the normal matrix
are avoided. Only one row of the design matrix is
processed at a time, making it ideal for sequentially
adding or deleting observations in an on-line
environment. If necessary, the solution vector can be
obtained at any stage of the process by back
substitution. The method can easily accommodate
weighted observations and parameters. A version of
Givens Transformations presented in Gentleman (1973)
avoids the computation of square roots, reduces the
number of required multiplications, and facilitates
weighted least-squares. This and similar “fast” recursive
algorithms are gaining favour in the area of parallel
processing due to the absence of the square root
operation (Hsieh et al., 1993). Additionally, the ability to
yield a solution in the absence of a positive definite
system may prove to be advantageous for the update of
systems encountered in the free-net adjustment of close-
range photogrammetric networks. This square root free
version of Givens is stressed in this study.
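To make the basic operation concrete, the following sketch (Python with NumPy; not part of the original text, and the function name givens_rotation is illustrative only) annihilates the leading element of one row of a small augmented system with a single plane rotation:

```python
import numpy as np

def givens_rotation(a, b):
    """Return (c, s) so that the plane rotation [[c, s], [-s, c]]
    maps the pair (a, b) onto (r, 0), annihilating b against a."""
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

# Two rows of an augmented system [A | l]; rotating them in the
# plane of column 0 annihilates the leading element of row 1.
rows = np.array([[4.0, 2.0, 10.0],
                 [3.0, 1.0,  7.0]])
c, s = givens_rotation(rows[0, 0], rows[1, 0])
G = np.array([[c, s],
              [-s, c]])
rows = G @ rows      # rows[1, 0] is now zero (to rounding error)
print(np.round(rows, 6))
```

The square root free variant of Gentleman (1973) achieves the same annihilation while carrying a separate diagonal scale factor, so that the rotation parameters can be formed without the square root evaluated here by np.hypot.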
2.1 Least-Squares with Orthogonal Transformations
First, the use of orthogonal transformations for standard
least-squares estimation with the familiar Gauss-Markov
model is illustrated. Given an m x 1 observation vector l and an m x n design matrix A such that m ≥ n, the goal is to determine the n x 1 parameter vector x in such a way as to minimise the sum of the squares of the elements of the m x 1 residual vector v, which is defined by

v = Ax - l.    (1)
Initially considering only unweighted observations, the
solution is given by
x̂ = (AᵀA)⁻¹Aᵀl.    (2)

This solution may be obtained with the Cholesky factorisation AᵀA = UᵀU, where U is an upper triangular matrix, or with the related factorisation AᵀA = UᵀDU, where U is unit upper triangular and D is diagonal. The only significant difference between the two is that the former uses square roots and the latter does not. Applying the Cholesky factorisation to the normal equations, with right hand side b = Aᵀl, results in the system

UᵀUx̂ = b.    (3)

With d = (Uᵀ)⁻¹b, the system reduces to

Ux̂ = d,    (4)
which is solved by back substitution.
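As a minimal illustration of Eqs. 2 to 4 (not from the original paper; the numerical values of A and l are arbitrary), the following Python/NumPy sketch forms the normal equations, factorises them, and recovers the parameters by forward and back substitution:

```python
import numpy as np

# Illustrative design matrix A (m x n) and observation vector l (m x 1).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
l = np.array([1.1, 1.9, 3.2, 3.9])

# Normal equations N x = b, with N = A^T A and b = A^T l (Eq. 2).
N = A.T @ A
b = A.T @ l

# Cholesky factorisation N = U^T U (NumPy returns the lower factor U^T).
U = np.linalg.cholesky(N).T

# Forward substitution d = (U^T)^-1 b, reducing the system to Eq. 4.
n = U.shape[0]
d = np.zeros(n)
for i in range(n):
    d[i] = (b[i] - U[:i, i] @ d[:i]) / U[i, i]

# Back substitution on U x = d (Eq. 4) gives the least-squares estimate.
x_hat = np.zeros(n)
for i in range(n - 1, -1, -1):
    x_hat[i] = (d[i] - U[i, i + 1:] @ x_hat[i + 1:]) / U[i, i]

print(x_hat)
print(np.linalg.lstsq(A, l, rcond=None)[0])   # independent cross-check
```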
If the decomposition A = QR is available, where Q is an m x n matrix with orthonormal columns and R is an n x n upper triangular matrix, the normal equations may be written as
RᵀQᵀQRx̂ = RᵀQᵀl.    (5)

Q is orthogonal, so QᵀQ = I, and R is nonsingular provided A has full column rank, giving

Rx̂ = Qᵀl.    (6)

Equations 4 and 6 are of identical form, with R corresponding to U and Qᵀl to d. Since R and d are obtained directly from the observation equations by orthogonal transformations, the least-squares system may be formed, solved and updated without ever constructing the normal equation matrix.

Extending the model to include a diagonal weight matrix P (assuming uncorrelated observations), the least-squares solution becomes

x̂ = (AᵀPA)⁻¹AᵀPl.    (7)

Because P is diagonal, the rows of A and l may be scaled by P^(1/2) and the problem reduced to the unweighted case with a modified design matrix and observation vector.
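A brief sketch of this reduction (illustrative values, Python/NumPy, not part of the original text) scales the rows by the square roots of the weights, forms the QR decomposition and solves Eq. 6 by back substitution:

```python
import numpy as np

# Illustrative design matrix, observations and diagonal weights.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
l = np.array([0.9, 2.1, 2.9])
p = np.array([1.0, 4.0, 2.0])        # diagonal of P

# Scaling by P^(1/2) turns the weighted problem into an unweighted one.
sqrt_p = np.sqrt(p)
A_bar = A * sqrt_p[:, None]
l_bar = l * sqrt_p

# Orthogonal decomposition A_bar = Q R (reduced form).
Q, R = np.linalg.qr(A_bar)

# Solve R x = Q^T l_bar (Eq. 6) by back substitution.
d = Q.T @ l_bar
n = R.shape[1]
x_hat = np.zeros(n)
for i in range(n - 1, -1, -1):
    x_hat[i] = (d[i] - R[i, i + 1:] @ x_hat[i + 1:]) / R[i, i]

print(x_hat)
```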
Now assume that the upper triangular factor U and the right hand side d represent the state of the system prior to the addition, deletion or replacement of an observation. The orthogonal transformation treatment follows closely the development above. Appending the observation equation for a new observation to the triangular system in the n unknown parameters gives, at stage k,

[ U   ]       [ d  ]
[ aₖᵀ ] x  =  [ lₖ ].    (8)

Here, aₖᵀ represents the row of the design matrix corresponding to the new observation, x is the vector of unknown parameters and lₖ is the associated right hand side element. The total number of rotations required to restore the triangular form is at most n. A series of n Givens Transformations, Q = Qₙ ... Q₂Q₁, applied to Eq. 8 annihilates the appended row and yields the updated triangular factor together with, for the right hand side, the updated vector d.
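A minimal sketch of such a row-wise update, written in the spirit of the square root free method of Gentleman (1973) but not taken from the original paper (the routine name include_row and the test values are illustrative only), is given below. The diagonal D is held as a vector, the unit upper triangular factor and the transformed right hand side are updated in place, and no square roots are required:

```python
import numpy as np

def include_row(d, rbar, theta, row, rhs, weight=1.0):
    """Fold one weighted observation row (row^T x = rhs) into the
    square-root-free factorisation A^T P A = Rbar^T D Rbar, with Rbar
    unit upper triangular (unit diagonal implicit) and theta the
    transformed right hand side. Returns the contribution of this
    observation to the weighted sum of squared residuals."""
    x = np.asarray(row, dtype=float).copy()
    y = float(rhs)
    n = len(d)
    for i in range(n):
        if weight == 0.0 or x[i] == 0.0:
            continue
        xi = x[i]
        d_new = d[i] + weight * xi * xi
        cbar = d[i] / d_new
        sbar = weight * xi / d_new
        weight *= cbar
        d[i] = d_new
        for k in range(i + 1, n):
            xk = x[k]
            x[k] = xk - xi * rbar[i, k]
            rbar[i, k] = cbar * rbar[i, k] + sbar * xk
        yk = y
        y = yk - xi * theta[i]
        theta[i] = cbar * theta[i] + sbar * yk
    return weight * y * y

# Sequentially add observations of the model x1 + t * x2 = l.
n = 2
d = np.zeros(n)          # diagonal of D
rbar = np.eye(n)         # unit upper triangular factor
theta = np.zeros(n)      # transformed right hand side
sse = 0.0
for a_row, l_k in [((1.0, 0.0), 1.0), ((1.0, 1.0), 2.1), ((1.0, 2.0), 2.9)]:
    sse += include_row(d, rbar, theta, a_row, l_k)

# Solution at any stage by back substitution on Rbar x = theta.
x_hat = theta.copy()
for i in range(n - 1, -1, -1):
    x_hat[i] -= rbar[i, i + 1:] @ x_hat[i + 1:]
print(x_hat, sse)
```

Deletion or replacement of an observation can, in principle, be handled by the same routine by folding the row in again with a negative weight, although this downdating step is more sensitive numerically than the addition of a row.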