Orientation parameters | Reference values | Approximate values
Left photo:
ω, φ, κ [gon]  | 1.4239, 2.7912, 97.2435   | 0.0, 0.0, 100.0
X0, Y0, Z0 [m] | 778.975, 849.507, 787.408 | 800.0, 800.0, 800.0
Right photo:
ω, φ, κ [gon]  | 1.8705, 3.5164, 96.9875   | 0.0, 0.0, 100.0
X0, Y0, Z0 [m] | 735.269, 318.586, 784.530 | 700.0, 300.0, 800.0

Table 1: Reference and starting approximate values for the left and right photo.
5 EMPIRICAL TEST SETUP
5.1 Interior orientation and reference exterior orientation
The fiducials have been measured by means of the semi-automatic capabilities of MATCH-T. The whole procedure takes only a few seconds. Affine transformations were used, and the resulting σ0 was 1.1 µm for the left image and 3.2 µm for the right image.
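As a minimal sketch of such an affine fiducial transformation, the six parameters can be estimated by least squares and σ0 computed from the residuals; the coordinate arrays passed to the function are placeholders, not the fiducial measurements of this test:

```python
import numpy as np

def fit_affine(measured_px, calibrated_mm):
    """Six-parameter affine transform from measured to calibrated fiducial coordinates."""
    measured = np.asarray(measured_px, dtype=float)      # n x 2 measured coordinates
    calibrated = np.asarray(calibrated_mm, dtype=float)  # n x 2 calibrated camera coordinates
    n = len(measured)
    # Design matrix: x' = a*x + b*y + c,  y' = d*x + e*y + f
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = measured
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = measured
    A[1::2, 5] = 1.0
    l = calibrated.reshape(-1)
    params, *_ = np.linalg.lstsq(A, l, rcond=None)
    v = A @ params - l                        # residuals
    sigma0 = np.sqrt(v @ v / (2 * n - 6))     # a posteriori standard deviation
    return params, sigma0
```

With four or eight fiducials the redundancy is 2 or 10, respectively, so σ0 values such as the 1.1 µm and 3.2 µm quoted above follow directly from the fiducial residuals.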
For the reference exterior orientation, only points within the overlap area have been measured. The orientation parameters have been computed by a bundle adjustment including manual measurements of 47 manhole covers. A priori standard deviations in the adjustment were 0.07 m for horizontal control, 0.15 m for vertical control (both corresponding to the specified standard deviations for well-defined points), and 1/3 pixel = 5 µm for image points. The RMS residuals were 2.7 µm in image space and 0.046 m (XY) and 0.086 m (Z) in object space.
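In such an adjustment the a priori standard deviations act as relative weights of the heterogeneous observation groups, p_i = (σ0/σ_i)². A brief sketch; taking the 5 µm image standard deviation as the reference σ0 is an assumption made only for illustration:

```python
# Relative weights p_i = (sigma_0 / sigma_i)^2 for the observation groups.
sigma_0 = 5.0e-6                              # reference: 5 um image measurement (assumed)
a_priori = {
    "image point [m]":        5.0e-6,         # 1/3 pixel = 5 um
    "horizontal control [m]": 0.07,
    "vertical control [m]":   0.15,
}
weights = {name: (sigma_0 / s) ** 2 for name, s in a_priori.items()}
# image points: 1.0, horizontal control: ~5.1e-9, vertical control: ~1.1e-9
```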
5.2 Starting values
The approximate values for ω and φ are set to 0 gon; κ is in this case set to 100 gon. X0, Y0 and Z0 are rounded off to the nearest 100 m. The reference and starting approximate values are given in table 1.
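A sketch of how these starting values can be generated from a rough projection-centre estimate, assuming near-vertical photography; the rounding reproduces the approximate values of table 1:

```python
def nearest_100(v):
    """Round a coordinate to the nearest 100 m."""
    return round(v / 100.0) * 100.0

def starting_values(x_rough, y_rough, z_rough):
    """Approximate exterior orientation for a near-vertical photo."""
    angles = (0.0, 0.0, 100.0)   # omega, phi, kappa [gon]
    return angles, (nearest_100(x_rough), nearest_100(y_rough), nearest_100(z_rough))

# Left photo of table 1: (778.975, 849.507, 787.408) -> (800.0, 800.0, 800.0)
print(starting_values(778.975, 849.507, 787.408))
```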
The size of the search area is 81 x 81 pixels in level 5, and 21 x 21 pixels in levels 4 to 0.
5.3 Evaluation of results
Results are evaluated after the robust bundle adjustment computation in each iteration, i.e. each level in the image pyramid. Firstly, the resulting image orientation parameters are compared to the reference values by an RMS calculation of the differences (RMSD) for the rotation angles and the position.
Secondly, the RMS residuals — resulting from inaccuracies in
the digital map and the automatic image measurements — are
computed for image space and object space (horizontal and
vertical control, respectively).
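Both figures are plain root-mean-square values over the differences and the residuals, respectively. A small sketch of the computation, here exercised on the reference and starting values of table 1:

```python
import numpy as np

def rms(values):
    """Root mean square of a set of values."""
    values = np.asarray(values, dtype=float)
    return float(np.sqrt(np.mean(values ** 2)))

def rmsd(estimated, reference):
    """RMS of the differences between estimated and reference parameters."""
    return rms(np.asarray(estimated, dtype=float) - np.asarray(reference, dtype=float))

# RMSD of the starting values of the left photo against the reference (table 1)
print(rmsd([0.0, 0.0, 100.0], [1.4239, 2.7912, 97.2435]))        # angles [gon]
print(rmsd([800.0, 800.0, 800.0], [778.975, 849.507, 787.408]))  # position [m]
# The same rms() is applied to the adjustment residuals in image and object space.
```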
6 RESULTS
If it is possible to detect blunders and estimate improved image orientation parameters from image points measured in level 5, with 480 µm resolution, much has been achieved. If we do not succeed at this level, we will not succeed in the higher-resolution pyramid levels either.
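The quoted resolutions are consistent with a pyramid that halves the resolution at each level, assuming a 15 µm scan pixel (the scan resolution is not stated here): level 5 then corresponds to 15 µm · 2^5 = 480 µm, and level 1 to 30 µm.

```python
# Geometric resolution per pyramid level, assuming a 15 um scan pixel (not stated here).
SCAN_PIXEL_UM = 15.0
for level in range(6):
    print(f"level {level}: {SCAN_PIXEL_UM * 2 ** level:.0f} um")  # level 5 -> 480 um
```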
However, a strong improvement of the orientation parameters is reached in level 5. Because of the highly redundant system, the robust bundle adjustment successfully detects the blunders, which amount to 15% of the measurements. A closer analysis of the blunders reveals that in cases of high vegetation, deep shadows or other kinds of heavy
"noise", the algorithm may detect one of the neighbouring, similar-looking intersections. An example is shown in figure 4. In other cases light roofs are taken for intersections.
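The text does not spell out which robust estimator the bundle adjustment uses; a common way to detect such blunders is iterative reweighting, in which observations with large standardized residuals are progressively downweighted until they no longer influence the solution. A generic sketch of that idea; the weight function below is a typical Danish-method-style choice, not necessarily the one used here:

```python
import numpy as np

def robust_weights(residuals, sigma, c=2.0):
    """Downweight observations whose standardized residual exceeds c (hypothetical weight function)."""
    u = np.abs(np.asarray(residuals, dtype=float)) / sigma
    return np.where(u <= c, 1.0, np.exp(-((u - c) ** 2)))

# In each reweighting iteration the adjustment is repeated with these weights;
# observations whose weight tends towards zero are finally reported as blunders.
```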
Figure 4: During the first iteration, the algorithm mixes up the two intersections shown (the correct one at the left), due to the large search area in the initial search and high vegetation.
The RMS residuals are quite low in image space, considering the 480 µm resolution, and very low in object space. The RMSD values for the orientation parameters are strongly improved. Please refer to table 2 for detailed results.
As can be seen from table 2, the iterative algorithm reaches a solution with low RMS difference and RMS residual values. There seems, however, to be some instability in the image level iterations, which probably stems from inappropriate automatic a priori standard deviations on the image points. This is also the reason why the lowest RMS residuals in object space are reached already in level 5(!).
From level 3 to level 2, there is no improvement in the RMS differences, and the number of blunders rises from about 2% to 10%. The RMS residual values, however, are significantly improved.
Comparing the figures for level 1 and level 0, no significant
improvement is obtained. This may suggest that for 1:5,000
imagery, 30 µm geometric resolution has enough information
for this kind of matching, using large control structures.
One should keep in mind that there are probably systematic
errors due to light/shadow conditions on the edges of the
road intersections. This has not yet been investigated, but
clearly, the results achieved in this test should be verified
through further tests. It is evident, however, that the digital
T3 map used in this test is sufficiently accurate for exterior
orientation.
The processing time on a Silicon Graphics INDY workstation
(133MHz R4600 CPU, 48 MB RAM) for 30 road intersections
in all image levels is about 13 minutes, without any kind of
optimization. This is sufficient, e.g. for overnight automatic